Journal of Financial Econometrics
Volume Advance Article – Jun 4, 2018

25 pages


- Publisher
- Oxford University Press
- Copyright
- © The Authors (2018). Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com
- ISSN
- 1479-8409
- eISSN
- 1479-8417
- DOI
- 10.1093/jjfinec/nby012

Abstract

The likelihood for discretely observed continuous-time GARCH models (Kluppelberg, Lindner, and Maller, 2004) is intractable, and existing methods based on generalized estimating equations or quasi likelihood are either inefficient or biased. We present a computationally feasible method of obtaining a smooth approximation to the likelihood surface, which can be optimized to give parameter estimates. Our approach combines sequential Monte Carlo (SMC) with a continuous resampling step, and applies to both regularly and irregularly spaced data. The use of SMC is made possible through a novel state space representation of the continuous-time GARCH model. The proposed methodology is illustrated with an analysis of high-frequency financial data. Our method is found through simulation studies to outperform the existing quasi maximum likelihood method.

1 Introduction

The COGARCH model was introduced in Kluppelberg, Lindner, and Maller (2004) as a continuous-time model for stochastic volatility. To arrive at the COGARCH model, the univariate noise series of a discrete time GARCH model is replaced by the increments of a Lévy process. The resulting COGARCH model is then a continuous-time analog of the discrete time GARCH model: both are systems driven by a single source of noise, and both incorporate a feedback mechanism whereby a perturbation in the current period continues to affect the variance of future perturbations. Calibration of the COGARCH model to data is still an open topic. The currently available methods can be grouped into three categories. The first group employs theoretical moments. Haug et al. (2007) equate theoretical moments with sample-based estimates, while Bibbona and Negri (2015) use theoretical moments to construct prediction-based estimating functions. Both these procedures, however, have only been demonstrated under the assumption of equally spaced observations.
To extend to irregularly spaced data requires overcoming the problem that the moments needed for both these methods depend on the time spacings between observations.

The second category is the quasi maximum likelihood (QML) method, which maximizes a quasi likelihood in place of the true likelihood. First suggested for COGARCH models in Maller, Muller, and Szimayer (2008), this method is based on a first jump approximation to the driving Lévy process and an assumption that the observed increments are Gaussian and conditionally independent. These simplifying assumptions lead to a tractable likelihood very similar to that used for fitting GARCH models to discrete time data. The QML method can be applied to regularly and irregularly spaced observations. However, as pointed out by Muller (2010), the QML estimator is not consistent in general. Although Kim and Lee (2013) established the consistency and asymptotic normality of a QML estimator based on a slightly different form of quasi likelihood, they assumed an asymptotic scenario in which the observation times grow infinitely dense, which is rather unrealistic from a practical point of view. Marin, Rodriguez-Bernal, and Romero (2015) use a data cloning technique to explore the quasi likelihood surface by Markov chain Monte Carlo (MCMC) methods.

The third category is the MCMC Bayesian procedure developed by Muller (2010) for the specific case of a compound Poisson driving Lévy process. It was reported that the posterior mode estimator outperformed the QML estimator in simulation studies. However, this MCMC procedure is computationally very expensive, as it attempts to uncover the complete sample path of the underlying compound Poisson process. Moreover, the choices of the prior distributions for the parameters and hyper-parameters influence the performance of the estimators, and there does not seem to be a principled approach to making these choices.
Inference for the COGARCH model based on the genuine likelihood, however, has not been addressed, although as a general inference methodology, maximum likelihood is expected to have various advantages, such as applicability to both regularly and irregularly spaced data, ease of variance estimation, and asymptotic efficiency. The challenge in making inferences based on the genuine likelihood in COGARCH models is the computation of the likelihood function, as the likelihood lacks a tractable form and its computation entails the evaluation of infinite-dimensional integrals. Development of a computationally feasible method for maximum likelihood estimation in the COGARCH model is the concern of this article. Since many high-frequency financial data sets, especially intraday data, contain time intervals without price changes, a model based on a finite activity pure jump process, that is, a compound Poisson process, seems more fitting. Therefore, as in Muller (2010), in this work we consider only the COGARCH model driven by a compound Poisson process. Through a novel state space representation of such a COGARCH process, we demonstrate that the sequential Monte Carlo (SMC) technique can be applied with the bootstrap particle filter (Gordon, Salmond, and Smith, 1993) to obtain an unbiased estimate of the likelihood of the compound Poisson COGARCH process based on either regularly or irregularly spaced discrete observations. The likelihood function approximated by the bootstrap particle filter is, however, not in general continuous in the model parameters, even when the randomness in the Monte Carlo simulation is controlled. Stochastic equicontinuity, a standard condition in the asymptotic theory of simulation-based estimators (see McFadden (1989) and Gourieroux and Monfort (1996)), is difficult to establish without continuity of the simulated likelihood in the model parameters.
Moreover, the lack of continuity creates difficulty in numerically optimizing the likelihood function to obtain the maximum likelihood estimator (MLE). To overcome this issue, we use the continuous resampling procedure developed by Pitt and Malik (2011) to produce a smooth likelihood surface, which is amenable to numerical optimization. Unlike the MCMC procedure of Muller (2010), which essentially attempts to fill in the missing information, the SMC procedure effectively integrates out the missing information. We demonstrate superior finite sample performance in comparison with the QML method through simulations.

This article is organized as follows. A brief description of the COGARCH process is provided in Section 2. The intractable likelihood of a COGARCH process based on discrete observations is detailed in Section 3. A means to obtain an unbiased estimate of the likelihood through SMC is presented in Section 4. A procedure to construct a continuous approximation to the likelihood surface for maximum likelihood estimation is outlined in Section 5. A real-world application of the procedure to high-frequency intraday financial data is illustrated in Section 6. Simulation studies are conducted in Section 7. The article concludes in Section 8 with a discussion.

2 The COGARCH(1, 1) Model

The COGARCH(1, 1) model introduced in Kluppelberg, Lindner, and Maller (2004) is the following system of stochastic differential equations

dG_t = σ_t dL_t,  dσ_{t+}² = (β − ησ_t²) dt + ϕσ_t² d[L, L]_t^{(d)},   (1)

where L_t is a Lévy process, β > 0, η > 0, and ϕ > 0 are model parameters, and [L, L]^{(d)} denotes the discrete quadratic variation of the Lévy process (cf. Protter, 2005, p. 66). Here and hereafter, t± indicates the right/left-hand limit at t. For a finite activity pure jump Lévy process, if we denote by N_t the number of jumps on the interval [0, t], by τ_1, …, τ_{N_t} the ordered jump times, and by z_1, …, z_{N_t} the corresponding i.i.d.
(independent and identically distributed) jump sizes, then the COGARCH(1, 1) process, with an initial volatility value σ_0, can be represented as

G_t = G_0 + ∫_0^t σ_s dL_s = G_0 + ∑_{i=1}^{N_t} σ_{τ_i} z_i,

where the skeleton volatility process {σ_{τ_i}} is given recursively by

σ_{τ_j}² = (β/η)(1 − e^{−η(τ_j − τ_{j−1})}) + e^{−η(τ_j − τ_{j−1})} σ_{τ_{j−1}+}²,   (2)

σ_{τ_j+}² = σ_{τ_j}²(1 + ϕz_j²),  j = 1, …, N_t,  and  σ_{τ_0+}² := σ_0².   (3)

We see from this representation that β/η bounds the squared volatility process from below and can be interpreted as a baseline value to which the squared volatility decays in the absence of jumps. The parameter η determines the speed of this decay, while ϕ controls the degree of impact new jumps have on the volatility. For parameter identifiability it is common, as in Haug et al. (2007) and Maller, Muller, and Szimayer (2008), to standardize the mean and variance per unit time of the driving Lévy process to be 0 and 1, respectively (i.e., E(L_1) = 0 and E(L_1²) = 1), which we assume hereafter. The initial value of the squared volatility is often set at its long-term mean σ_0² = β/(η − ϕ) (see Proposition 4.2 of Kluppelberg, Lindner, and Maller (2004)).

3 Likelihood Based on Discrete Observations of the COGARCH Process

Assume the driving Lévy process is compound Poisson with arrival rate λ and a jump size distribution with density q(·) relative to Lebesgue measure μ. Note that this implies the jump size distribution has no point mass at 0.
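To make the recursions concrete, the following Python sketch (an illustration with our own naming, not the authors' code) simulates the skeleton of a compound Poisson COGARCH(1, 1) path using recursions (2) and (3):

```python
import math
import random

def simulate_cogarch_skeleton(beta, eta, phi, lam, T, jump_sampler, seed=0):
    """Simulate jump times, process values G, and squared skeleton
    volatilities of a compound Poisson COGARCH(1,1) via recursions (2)-(3)."""
    rng = random.Random(seed)
    sig2 = beta / (eta - phi)          # start at the long-term mean (Section 2)
    t, G = 0.0, 0.0
    times, values, vols = [], [], []
    while True:
        dt = rng.expovariate(lam)      # exponential inter-jump time
        if t + dt > T:
            break
        t += dt
        # decay toward the baseline beta/eta between jumps, recursion (2)
        sig2 = beta / eta + (sig2 - beta / eta) * math.exp(-eta * dt)
        z = jump_sampler(rng)
        G += math.sqrt(sig2) * z       # increment of G at the jump
        times.append(t)
        values.append(G)
        vols.append(sig2)
        sig2 *= 1.0 + phi * z * z      # post-jump update, recursion (3)
    return times, values, vols
```

Here `jump_sampler` draws a standardized jump size, for example `lambda rng: rng.gauss(0.0, 1.0)`; since the path starts at σ_0² = β/(η − ϕ) > β/η, the simulated squared volatility never drops below the baseline β/η.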
If one observes the complete sample path of the process G_t on the observation interval [0, T], or equivalently the number N_T of jumps of the process in the interval together with the jump times τ_1, …, τ_{N_T} and the respective jump sizes g_i = G_{τ_i} − G_{τ_i−}, then the likelihood of the model parameters θ = (β, η, ϕ, λ) is the density function of the distribution of the compound Poisson COGARCH process, interpreted as a function of the parameter vector θ, given by

lik_c(θ) = λ^{N_T} e^{−λT} ∏_{i=1}^{N_T} (1/σ_{τ_i}) q(g_i/σ_{τ_i}),

where the σ_{τ_i}'s depend on the parameters θ and the initial volatility σ_0² = β/(η − ϕ) through the recursions (2)–(3), with the z_i in (3) replaced by g_i/σ_{τ_i}. The parameter space is Θ = {(β, η, ϕ, λ) : β > 0, ϕ > 0, η > ϕ, λ > 0}. While the complete data likelihood is easy to compute and is a smooth function of the parameter vector, in practice we do not normally have continuous observations of the COGARCH process. The typical observations are the values of the process at a set of discrete time points 0 = t_0 < t_1 < … < t_n = T. It is assumed that the observation times t_i, i = 1, …, n, are independent of the COGARCH process and that their choice does not depend on the parameters of the COGARCH model. Assuming the COGARCH process has a known initial value G_0, the observations are equivalent to the observed increments of G over successive observation times, Y_i = G_{t_i} − G_{t_{i−1}}, i = 1, …, n. Since each Y_i has a positive probability of being 0 (due to the compound Poisson Lévy process), we define the likelihood to be the density p_θ(Y_{1:n}) of the observations (Y_1, …, Y_n) =: Y_{1:n} with respect to the product measure (δ_0 + μ)^{⊗n}, interpreted as a function of the parameter vector θ. Here, and hereafter, we use δ_x to denote the Dirac measure at x.
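For illustration, the complete-data log-likelihood log lik_c(θ) takes only a few lines of Python (a sketch under our own naming, with a user-supplied log jump-size density `log_q`):

```python
import math

def complete_data_loglik(theta, T, jump_times, jump_increments, log_q):
    """Evaluate log lik_c(theta) of Section 3 from a fully observed path:
    jump times tau_i and increments g_i = G_{tau_i} - G_{tau_i-}."""
    beta, eta, phi, lam = theta
    loglik = len(jump_times) * math.log(lam) - lam * T   # Poisson count factor
    sig2 = beta / (eta - phi)      # initial squared volatility
    prev = 0.0
    for tau, g in zip(jump_times, jump_increments):
        # decay between jumps, recursion (2)
        sig2 = beta / eta + (sig2 - beta / eta) * math.exp(-eta * (tau - prev))
        sig = math.sqrt(sig2)
        loglik += log_q(g / sig) - math.log(sig)   # density term (1/sigma) q(g/sigma)
        z = g / sig                                # implied standardized jump size
        sig2 *= 1.0 + phi * z * z                  # recursion (3) with z_i = g_i/sigma_{tau_i}
        prev = tau
    return loglik
```

With no jumps observed, the expression reduces to −λT, the log-probability that the compound Poisson process is silent on [0, T].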
Formally, the likelihood can be written in the following form,

lik(θ) = p_θ(Y_{1:n}) = ∏_{i=1}^n p_θ(Y_i | Y_{1:i−1}) = ∏_{i=1}^n (e^{−λΔt_i})^{I(Y_i=0)} {(1 − e^{−λΔt_i}) ∫ (1/σ_{τ_{N_{t_i}}}) q((G_{t_i} − G_{τ_{N_{t_i}}−})/σ_{τ_{N_{t_i}}}) P(dG_{τ_{N_{t_i}}−}, dσ_{τ_{N_{t_i}}} | Y_{1:i−1})}^{I(Y_i≠0)},   (4)

where Δt_i = t_i − t_{i−1} and P(dG_{τ_{N_{t_i}}−}, dσ_{τ_{N_{t_i}}} | Y_{1:i−1}) denotes the conditional joint distribution of G_{τ_{N_{t_i}}−} and σ_{τ_{N_{t_i}}} given the observations Y_{1:i−1}. Recall that, when Y_i ≠ 0, there is at least one jump in the interval (t_{i−1}, t_i]; G_{τ_{N_{t_i}}−} represents the value of the process G just before the last jump time in the interval, G_{t_i} − G_{τ_{N_{t_i}}−} the size of the last jump of G in the interval, and σ_{τ_{N_{t_i}}}² the squared volatility just before the last jump in the interval. The conditional distributions P(dG_{τ_{N_{t_i}}−}, dσ_{τ_{N_{t_i}}} | Y_{1:i−1}) in (4) are intractable in general, and therefore the likelihood cannot be directly computed. We show in the next section how the SMC technique can be applied to obtain an unbiased estimate of the likelihood function.

4 SMC Approximation of the Likelihood

In this section, we outline the procedure for obtaining an unbiased estimate of the COGARCH likelihood. To begin, we formulate the observed increments of the COGARCH process as the observations of a hidden Markov process model. While there are various ways to do this, we do it in such a way that the resulting model is suitable for SMC implementation.

4.1 Hidden Markov Process Representation of the COGARCH Model with Discrete Observations

For notational convenience, let n_i be the number of jumps of the COGARCH process that occur in the interval (t_{i−1}, t_i], let t_{i−1} < τ_{i,1} < … < τ_{i,n_i} < t_i be the n_i jump times, and let z_{i,1}, …, z_{i,n_i} be the corresponding n_i jump sizes of the underlying compound Poisson process. That is, n_i = N_{t_i} − N_{t_{i−1}}, and τ_{i,j} = τ_{N_{t_{i−1}}+j}, z_{i,j} = z_{N_{t_{i−1}}+j}, j = 1, …, n_i. When n_i = 0 we define τ_{i,0} := t_{i−1}. Recall that

Y_i = G_{t_i} − G_{t_{i−1}} = ∑_{j=1}^{n_i} σ_{τ_{i,j}} z_{i,j},   (5)

where the σ_{τ_{i,j}}, j = 1, …, n_i, are recursively given as in (2) and (3), with initial value σ_{τ_{i,0}+}² = σ_{t_{i−1}}², and the sum is interpreted as 0 when n_i < 1. Let

X_i = (σ_{t_{i−1}}², ∑_{j=1}^{n_i−1} σ_{τ_{i,j}} z_{i,j}, σ_{τ_{i,n_i}}², τ_{i,n_i}).
(6)

Let X_{i,j} denote the j-th element of X_i. Then X_{i,1} is the squared volatility at the beginning of the i-th observation interval, X_{i,2} is the increment of the COGARCH process in the interval before the last jump time in the interval, X_{i,3} is the squared volatility at the last jump time, and X_{i,4} is either the last jump time or the beginning of the observation interval, depending on whether n_i > 0 or n_i = 0. Note that the distribution of X_{i,2:4} is fully determined once X_{i,1} is given. Note also from (5) that the distribution of Y_i is fully determined by X_{i,2:4} through

Y_i = 0 when X_{i,4} = t_{i−1} ⇔ n_i = 0,  and  Y_i = X_{i,2} + √(X_{i,3}) z_{i,n_i} when X_{i,4} > t_{i−1} ⇔ n_i > 0.   (7)

By the model specification (1), X_{i,1} is a deterministic function of (X_{i−1}, Y_{i−1}) through

X_{i,1} = β/η + e^{−η(t_{i−1} − X_{i−1,4})}(X_{i−1,3} + ϕ(Y_{i−1} − X_{i−1,2})² − β/η) =: g(X_{i−1}, Y_{i−1}, t_{i−1}).   (8)

With this formulation, the Y_i can be regarded as (imperfect) observations of the hidden state sequence X_i. Since X_{i,1} is Markovian (see Theorem 3.2 of Kluppelberg, Lindner, and Maller (2004)), so is X_i, and therefore we have a hidden Markov model. However, it should be noted that the current model differs from the typical hidden Markov model in at least two ways. The first is that in the current model the observations feed into the state evolutions, while in typical hidden Markov models they do not (Figure 1). The second is that the observation noise in our model has a distribution that is a mixture of the (continuous) jump distribution and the degenerate distribution located at 0, while in typical hidden Markov models the noise distribution is either continuous or discrete. In the next section, we shall see that these differences do not cause essential difficulties in implementing the SMC procedure.
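The deterministic map (8) is straightforward to implement; the sketch below (our own naming) propagates the squared volatility to the start of the next observation interval:

```python
import math

def g_transition(x_prev, y_prev, t_start, beta, eta, phi):
    """Equation (8): squared volatility at time t_start given the previous
    state x_prev = (X1, X2, X3, X4) and previous observation y_prev."""
    _, x2, x3, x4 = x_prev
    # squared volatility just after the last jump: recursion (3),
    # with sigma * z recovered as y_prev - x2
    post_jump = x3 + phi * (y_prev - x2) ** 2
    # exponential decay toward beta/eta from the last jump time x4 to t_start
    return beta / eta + (post_jump - beta / eta) * math.exp(-eta * (t_start - x4))
```

When there was no jump in the previous interval (y_prev = x2 = 0 and x4 the interval start), the map reduces to pure decay of x3 toward the baseline β/η.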
Now note that, with the above formulation, the conditional density of Y_i given X_i, relative to the measure δ_0 + μ, has the explicit form

p(Y_i | X_i) = I(Y_i = 0) when X_{i,4} = t_{i−1},  and  p(Y_i | X_i) = (1/√(X_{i,3})) q((Y_i − X_{i,2})/√(X_{i,3})) I(Y_i ≠ 0) when X_{i,4} > t_{i−1},   (9)

where we recall that q is the μ-density of the jump sizes of the underlying compound Poisson process. The ease of evaluating (9) is a big advantage of our choice of the state variable X_i over other possible choices. For instance, if the state variable is chosen as σ_{t_{i−1}}², then the resulting p(Y_i | X_i) does not admit an analytical form and is very difficult to evaluate, although we still have a hidden Markov model. As will be revealed shortly, the SMC procedure requires p(Y_i | X_i) to be evaluated frequently, and it is therefore the state representation (6) that makes a computationally feasible implementation possible. Finally, we also note that, with our choice of the state variable, even without its first dimension, X_i is still a Markov process and the conditional density p(Y_i | X_i) takes the same form as (9). However, we keep the first dimension of X_i for convenience when implementing the SMC procedure presented later.

Figure 1. Illustration of the typical (left) and the current (right) hidden Markov model structures.
Algorithm 1 SMC Log-Likelihood Estimation: Bootstrap Filter

 1: procedure
 2:   Initialisation:
 3:   X_{0,3}^{1:N} ← β/(η − ϕ)
 4:   loglik ← Y_0 ← t_0 ← X_{0,2}^{1:N} ← X_{0,4}^{1:N} ← κ ← 0
 5:   Set seed to fix randomness
 6:   Loop:
 7:   for i in 1:n do
 8:     if Y_i ≠ 0 then
 9:       loglik ← loglik + log(1 − e^{−λΔt_i})
10:       for j in 1:N do
11:         if κ > 0 then
12:           Sample ν from the set {1, …, N} according to the probabilities {w_κ^1, …, w_κ^N}
13:         else
14:           ν ← j
15:         X_{i,1}^j ← β/η + (X_{κ,3}^ν + ϕ(Y_κ − X_{κ,2}^ν)² − β/η) e^{−η(t_{i−1} − X_{κ,4}^ν)}
16:         Simulate U_1 ∼ Uniform(0, 1)
17:         τ_{i,1}^j ← t_{i−1} − (1/λ) log(1 − (1 − e^{−λΔt_i}) U_1)
18:         X_{i,2}^j ← 0
19:         X_{i,3}^j ← β/η + (X_{i,1}^j − β/η) e^{−η(τ_{i,1}^j − t_{i−1})}
20:         Simulate r ∼ Poisson(λ(t_i − τ_{i,1}^j))
21:         if r > 0 then
22:           Simulate z_1, …, z_r as i.i.d. random variables with density q
23:           Simulate U_2 < ⋯ < U_{r+1} as i.i.d. standard uniforms sorted in ascending order
24:           for s in 1:r do
25:             τ_{i,s+1}^j ← τ_{i,1}^j + U_{s+1}(t_i − τ_{i,1}^j)
26:             X_{i,2}^j ← X_{i,2}^j + √(X_{i,3}^j) z_s
27:             X_{i,3}^j ← β/η + (X_{i,3}^j(1 + ϕz_s²) − β/η) e^{−η(τ_{i,s+1}^j − τ_{i,s}^j)}
28:         X_{i,4}^j ← τ_{i,r+1}^j
29:         w_i^j ← (1/√(X_{i,3}^j)) q((Y_i − X_{i,2}^j)/√(X_{i,3}^j))
30:       loglik ← loglik + log((1/N) ∑_{j=1}^N w_i^j)
31:       Normalise w_i^j ← w_i^j / ∑_{j=1}^N w_i^j
32:       κ ← i
33:     else
34:       loglik ← loglik − λΔt_i

4.2 SMC Approximation of the Likelihood

To introduce the SMC approximation to the likelihood function, we first write the likelihood as follows,

lik(θ) = ∏_{i=1}^n p(Y_i | Y_{1:i−1}) = ∏_{i=1}^n ∫ p(Y_i | X_i) P(dX_i | Y_{1:i−1}),   (10)

with p(Y_i | Y_{1:i−1}) and P(dX_i | Y_{1:i−1}) interpreted as p(Y_1) and P(dX_1), respectively, when i = 1. Note that since P(X_{i,4} = t_{i−1} | Y_{1:i−1}) = P(n_i = 0) = e^{−λΔt_i} and P(X_{i,4} > t_{i−1} | Y_{1:i−1}) = P(n_i > 0) = 1 − e^{−λΔt_i}, we have

P(dX_i | Y_{1:i−1}) = e^{−λΔt_i} P(dX_i | Y_{1:i−1}, X_{i,4} = t_{i−1}) + (1 − e^{−λΔt_i}) P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}).   (11)

Then, as {X_{i,4} = t_{i−1}} implies {Y_i = 0} and {X_{i,4} > t_{i−1}} implies {Y_i ≠ 0}, we have

∫ p(Y_i | X_i) P(dX_i | Y_{1:i−1}) = e^{−λΔt_i} when Y_i = 0,  and  = (1 − e^{−λΔt_i}) ∫ p(Y_i | X_i) P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) when Y_i ≠ 0.   (12)

In general the conditional distributions P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) are intractable, which hinders direct evaluation of (10). Thus, we resort to a Monte Carlo approach, by first simulating an i.i.d.
random sample X_i^{1:N} = {X_i^1, …, X_i^N}, called particles, from an approximation to P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}), denoted by P̃(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) and described below. Secondly, replacing P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) by the empirical distribution of the particles,

P̂(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) = (1/N) ∑_{j=1}^N δ_{X_i^j}(dX_i),

we approximate the integrals with respect to the conditional distributions as follows:

∫ p(Y_i | X_i) P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) ≈ ∫ p(Y_i | X_i) P̂(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) = (1/N) ∑_{j=1}^N p(Y_i | X_i^j).   (13)

The SMC approximation of the likelihood is then given by

lik̂(θ) = ∏_{i=1}^n ( I(Y_i = 0) e^{−λΔt_i} + I(Y_i ≠ 0)(1 − e^{−λΔt_i}) (1/N) ∑_{j=1}^N p(Y_i | X_i^j) ).   (14)

At this point, we emphasize that in each observation interval with Y_i ≠ 0, two approximations to the conditional distribution P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) are used: the “tilde” version P̃(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) is used to generate particles, while the “hat” version P̂(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) is used both to approximate the integrals in (13) and to produce the “tilde” version in the next time interval where it is needed, as we explain next.

To present the method for generating particles from P̃(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}), we first note that particle generation is needed only in observation intervals with Y_i ≠ 0. Let κ_m := min{k : ∑_{j=1}^k I(Y_j ≠ 0) = m} denote the index of the m-th such interval, for m = 1, 2, …. Assume i = κ_m for some m. Our choice of the state variable in the hidden Markov process implies that

P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) = ∫ P(dX_i | X_{i,1} = g(X_{κ_{m−1}}, Y_{κ_{m−1}}, t_{i−1}), X_{i,4} > t_{i−1}) P(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}),   (15)

where we recall g(X_{κ_{m−1}}, Y_{κ_{m−1}}, t_{i−1}) = β/η + e^{−η(t_{i−1} − X_{κ_{m−1},4})}(X_{κ_{m−1},3} + ϕ(Y_{κ_{m−1}} − X_{κ_{m−1},2})² − β/η), with the conventions X_{κ_0}^{1:N} ≡ (σ_0², 0, σ_0², 0), Y_{κ_0} := 0, and t_0 := 0.
This motivates the “tilde” version approximation to P(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}),

P̃(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) = ∫ P(dX_i | X_{i,1} = g(X_{κ_{m−1}}, Y_{κ_{m−1}}, t_{i−1}), X_{i,4} > t_{i−1}) P̂(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}),   (16)

where the posterior P̂(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}), for m > 1, is determined by the particles from the previous time step κ_{m−1} through

P(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}) = P(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}, X_{κ_{m−1},4} > t_{κ_{m−1}−1})
= p(Y_{κ_{m−1}} | X_{κ_{m−1}}) P(dX_{κ_{m−1}} | Y_{1:(κ_{m−1}−1)}, X_{κ_{m−1},4} > t_{κ_{m−1}−1}) / ∫ p(Y_{κ_{m−1}} | X_{κ_{m−1}}) P(dX_{κ_{m−1}} | Y_{1:(κ_{m−1}−1)}, X_{κ_{m−1},4} > t_{κ_{m−1}−1})
≈ ∑_{j=1}^N p(Y_{κ_{m−1}} | X_{κ_{m−1}}^j) δ_{X_{κ_{m−1}}^j}(dX_{κ_{m−1}}) / ∑_{j=1}^N p(Y_{κ_{m−1}} | X_{κ_{m−1}}^j) =: P̂(dX_{κ_{m−1}} | Y_{1:κ_{m−1}}).   (17)

Define w_m^j := p(Y_{κ_m} | X_{κ_m}^j) / ∑_{k=1}^N p(Y_{κ_m} | X_{κ_m}^k) for j = 1, …, N; then combining (16) and (17) gives

P̃(dX_i | Y_{1:i−1}, X_{i,4} > t_{i−1}) = ∑_{j=1}^N w_{m−1}^j P(dX_i | X_{i,1} = g(X_{κ_{m−1}}^j, Y_{κ_{m−1}}, t_{i−1}), X_{i,4} > t_{i−1}).   (18)

To simulate a particle according to (18), one first samples an index ν from the set {1, …, N} according to the probabilities {w_{m−1}^1, …, w_{m−1}^N} (termed bootstrap resampling), then simulates an X_i according to P(dX_i | X_{i,1} = g(X_{κ_{m−1}}^ν, Y_{κ_{m−1}}, t_{i−1}), X_{i,4} > t_{i−1}). Given X_{i,1} and the condition X_{i,4} > t_{i−1}, the remaining dimensions X_{i,2:4} are obtained by simulating the following components of the compound Poisson process: the number of jumps n_i in the interval (conditional on n_i ≥ 1), the times t_{i−1} < τ_{i,1} < … < τ_{i,n_i} < t_i at which these jumps occurred, and the sizes of these jumps (except for the last jump), z_{i,1}, …, z_{i,n_i−1}. Define u_i = τ_{i,1} − t_{i−1}; then u_i follows a truncated exponential distribution with rate λ, whereby P(u_i < t) = (1 − e^{−λt})/(1 − e^{−λΔt_i}) for t < Δt_i. Thus, one can simulate a Uniform(0, 1) random variable U_1 to obtain

τ_{i,1} = t_{i−1} − (1/λ) log(1 − (1 − e^{−λΔt_i}) U_1).   (19)

Then one can simulate a Poisson random variable r with mean λ(t_i − τ_{i,1}) to obtain the number of jumps, conditional on at least one jump having occurred, as n_i = 1 + r. Proceeding, simulate r i.i.d. Uniform(0, 1) random variables, denote by U_2, …, U_{n_i} the sorted uniforms, and then obtain

τ_{i,s} = τ_{i,1} + U_s(t_i − τ_{i,1}),  s = 2, …, n_i.   (20)

Finally, the z_{i,1}, …, z_{i,n_i−1} are r i.i.d. draws from the density q.
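The simulation steps (19)–(20) can be sketched as follows (illustrative Python with our own naming; a small Knuth sampler stands in for a library Poisson generator):

```python
import math
import random

def sample_poisson(mean, rng):
    """Knuth's Poisson sampler; adequate for the moderate means used here."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def propose_particle(x1, t_prev, t_cur, beta, eta, phi, lam, jump_sampler, rng):
    """Draw (X2, X3, X4) given X1 = x1 and at least one jump in (t_prev, t_cur],
    following Equations (19)-(20) and recursions (2)-(3)."""
    dt = t_cur - t_prev
    # first jump time: inverse CDF of the truncated exponential, Equation (19)
    tau = t_prev - math.log(1.0 - (1.0 - math.exp(-lam * dt)) * rng.random()) / lam
    x2 = 0.0
    x3 = beta / eta + (x1 - beta / eta) * math.exp(-eta * (tau - t_prev))
    r = sample_poisson(lam * (t_cur - tau), rng)   # number of further jumps
    last = tau
    for u in sorted(rng.random() for _ in range(r)):
        t_next = tau + u * (t_cur - tau)           # ordered jump times, Equation (20)
        z = jump_sampler(rng)
        x2 += math.sqrt(x3) * z                    # accumulate pre-last-jump increment
        # volatility update across the jump, then decay to the next jump time
        x3 = beta / eta + (x3 * (1.0 + phi * z * z) - beta / eta) \
            * math.exp(-eta * (t_next - last))
        last = t_next
    return x2, x3, last    # (X2, X3, X4); the size of the last jump is left unsimulated
```

The last jump size is deliberately not drawn: it is pinned down by the observation Y_i through the density (9), which is exactly how the weight in Algorithm 1 is computed.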
This completes the simulation of all the components required to generate a particle. In practice, to obtain the parameter estimates, the log of the approximated likelihood,

loglik̂(θ) = ∑_{i=1}^n ( −λΔt_i I(Y_i = 0) + I(Y_i ≠ 0){log(1 − e^{−λΔt_i}) + log((1/N) ∑_{j=1}^N p(Y_i | X_i^j))} ),   (21)

is maximized instead of (14). Pseudo code summarizing the above steps for obtaining loglik̂(θ) is provided in Algorithm 1.

5 Estimating Parameters

The aim is to estimate the COGARCH parameters β, η, ϕ and the jump intensity λ. Recall that, for parameter identifiability, E(L_1) = 0 and E(L_1²) = 1. In this way, the variance of the jump distribution is set as the inverse of the jump rate λ.

5.1 Estimating λ

Taking advantage of the fact that in a given interval [t_{i−1}, t_i] the probability that no jump occurs is e^{−λΔt_i} and the probability that at least one jump occurs is 1 − e^{−λΔt_i}, we first estimate λ by maximizing the partial likelihood

L(λ) = ∏_{i=1}^n (e^{−λΔt_i})^{I(Y_i=0)} (1 − e^{−λΔt_i})^{I(Y_i≠0)}.   (22)

In practice, the high-frequency data sets usually associated with COGARCH modeling are sufficiently large for (22) to yield very precise estimates of λ.

5.2 Estimating β, η, and ϕ

Fixing the randomness and evaluating (14) at different values of β, η, and ϕ will not generally produce a smooth likelihood surface. Under bootstrap resampling, discontinuities arise because at each time step i with Y_i ≠ 0, the particles X_i^j are resampled from a discontinuous empirical cumulative distribution function (ECDF),

F̂_N(x) = P̂(X_i ≤ x | Y_{1:i}) = ∑_{j=1}^N P̂(X_i^j | Y_{1:i}) I(X_i^j ≤ x),   (23)

where I(·) is the indicator function. Figure 2a illustrates how a small change in parameters can lead to a large change in the particles sampled. While both the vertical and horizontal shifts in the ECDF are continuous in θ, when the ECDF is inverted and the X_i are sampled from a fixed set of uniform random variables, one cannot guarantee that the whole set of sampled particles X_i^{1:N} is continuous in θ.
While the initial magnitude of the difference in the sampled particles across the parameter change may be small, because the procedure is sequential in nature the difference will be compounded in all later iterations.

Figure 2. (a) The solid step function is the ECDF under some parameter θ1 and the dashed step function is the ECDF under some parameter θ2 very close to θ1. A small shift in the parameter θ causes a small shift in the ECDF. Fixing the same uniform U and inverting the two discontinuous stepwise ECDFs, as illustrated, can result in drastically different sampled particles X^{θ1} and X^{θ2}. (b) Inverting the same uniform across the ECDFs under θ1 (thick solid line) and θ2 (long dashed line) could result in drastically different sampled particles X^{θ1} and X^{θ2}. The continuous approximation of the ECDF under θ1 is the thin solid line and that under θ2 is the short dashed line. By instead inverting these continuous approximations, large deviations between sampled particles under small parameter changes are eliminated.

Figure 3. COGARCH model structure.

If the support of X_i is some interval of ℝ, then Pitt and Malik (2011) propose constructing a continuous approximation F̃_N(x) of F̂_N(x) and then resampling particles X_i^j by inverting uniforms based on F̃_N(x). Assume the particles X_i^j, j = 1, …, N, are sorted in ascending order; then

F̃_N(x) = π_0 I(x < X_i^1) + ∑_{j=1}^{N−1} π_j H((x − X_i^j)/(X_i^{j+1} − X_i^j)) + π_N I(x ≥ X_i^N),   (24)

where π_0 = P̂(X_i^1 | Y_{1:i})/2, π_N = P̂(X_i^N | Y_{1:i})/2, π_j = (P̂(X_i^{j+1} | Y_{1:i}) + P̂(X_i^j | Y_{1:i}))/2 for j = 1, …, N−1, and H(z) := max(0, min(z, 1)). Figure 2b illustrates the idea behind the procedure (see also Figure 1 in Pitt and Malik (2011)). It was shown in Pitt and Malik (2011) that the distance ‖F̂_N(x) − F̃_N(x)‖_∞ is of order 1/N.

5.2.1 Continuous COGARCH SMC likelihood surface

Now, in our case, X_i is a four-dimensional vector, so we cannot directly apply the above procedure to resample our hidden states in a continuous manner. However, as the model structure can be depicted as in Figure 3, we really only need a way to continuously resample the one-dimensional quantity σ_{t_i}². To do this, we construct an empirical cumulative distribution of σ_{t_i}² from the X_i^j,

Q̂(σ_{t_i}² ≤ x | Y_{1:i}) = ∑_{j=1}^N α_j I(g(X_i^j, Y_i, t_i) ≤ x),   (25)

where the weights are α_j = P̂(X_i^j | Y_{1:i}) and σ_{t_i}²(j) := g(X_i^j, Y_i, t_i), with the function g(·) given in (8). Now, since the weights α_j are continuous in X_i and not in σ_{t_i}², even if σ_{t_i}²(j) and σ_{t_i}²(j+1) are close together, this does not guarantee that the corresponding weights will be. To ensure continuity, we must smooth the weights. The following kernel smoothing approach was proposed in Pitt and Malik (2011):

ᾱ_j = ∑_{k=1}^N α_k φ((σ_{t_i}²(j) − σ_{t_i}²(k))/h) / ∑_{k=1}^N φ((σ_{t_i}²(j) − σ_{t_i}²(k))/h),   (26)

α̃_j = ᾱ_j / ∑_{k=1}^N ᾱ_k,  j = 1, …, N,   (27)

where φ is the standard Gaussian density and h = c/N for a very small number c.
Hence, we then apply the continuous resampling procedure to the following cumulative distribution:

P̂(σ_{t_i}² ≤ x | Y_{1:i}) = ∑_{j=1}^N α̃_j I(σ_{t_i}²(j) ≤ x).   (28)

Pseudo code implementing the continuous resampling procedure is provided in Algorithm 2. Figure 4 illustrates the contrast between the jagged log-likelihood surface obtained using bootstrap resampling and the smooth log-likelihood surface obtained using continuous resampling.

Figure 4. A data set with 5000 equally spaced observations of a COGARCH model with true parameters β = 1, η = 0.06, and ϕ = 0.0425 was generated. The plots show the simulated log-likelihoods of this data set as β is varied (keeping η and ϕ fixed at their true values). The left panel uses bootstrap resampling and the right panel uses the continuous resampling procedure.

The approach is to estimate λ by maximizing (22), denote this estimate λ̂, and then estimate β, η, and ϕ by maximizing the log-likelihood surface obtained using SMC with continuous resampling and noise components driven by λ̂. The reason for this two-step approach is that, unlike β, η, and ϕ, the parameter λ drives the noise components of the underlying compound Poisson process, and increasing or decreasing λ will add or subtract jump size random variables, making it impossible to obtain a simulated likelihood surface that is smooth in λ. An additional benefit of the two-step approach is that it reduces the dimensionality of the optimization. The SE for λ̂ can be obtained in the usual manner from the second derivative of the log of (22) with respect to λ.
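A one-dimensional version of the smooth inversion in (24) can be sketched in Python as follows (our own naming; the kernel smoothing of the weights in (26)–(27) is omitted, and already-smoothed weights are assumed):

```python
def continuous_resample(particles, weights, uniforms):
    """Smooth resampling of a 1-D weighted particle set in the spirit of
    Equation (24): invert a piecewise-linear CDF instead of the step ECDF,
    so resampled values vary continuously with particle locations/weights."""
    order = sorted(range(len(particles)), key=lambda j: particles[j])
    x = [particles[j] for j in order]
    w = [weights[j] for j in order]
    n = len(x)
    # region probabilities: half-weights at the two ends, averages in between
    pi = [w[0] / 2.0] + [(w[j] + w[j + 1]) / 2.0 for j in range(n - 1)] + [w[-1] / 2.0]
    out = []
    for u in uniforms:
        c = pi[0]
        if u <= c:                      # below the first half-weight region
            out.append(x[0])
            continue
        j = 0
        while j < n - 1 and u > c + pi[j + 1]:
            c += pi[j + 1]
            j += 1
        if j >= n - 1:                  # above the last half-weight region
            out.append(x[-1])
        else:                           # linear interpolation inside region j
            frac = (u - c) / pi[j + 1]
            out.append(x[j] + frac * (x[j + 1] - x[j]))
    return out
```

Passing sorted uniforms (as in Algorithm 2) yields a sorted set of resampled particles; small perturbations of the particle locations or weights perturb the output continuously, which is exactly the property needed for a smooth likelihood surface.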
For the estimates of β, η, and ϕ, the SEs obtained in the usual manner from the numerical Hessian of the smooth SMC log-likelihood require adjustment, as recommended by Murphy and Topel (1985), because a two-step estimation procedure is used. As it is not possible to obtain a simulated likelihood surface that is smooth in λ, it is difficult to obtain the partial derivatives of the likelihood with respect to λ that are required to implement the adjustment suggested by Murphy and Topel (1985). Nevertheless, SEs that account for the two-step estimation are easily obtained through simulation using the parametric bootstrap (see Efron (1982)). In future work, as an alternative means to obtain SEs, one could explore the method of Doucet, Jacob, and Rubenthaler (2015) for derivative-free estimation of observed information matrices.

Algorithm 2 SMC Log-Likelihood Estimation: Continuous Approximation

Lines 1–9 from Algorithm 1
10: for j in 1:N do
11:   if κ > 0 then
12:     γ_1 ← γ_2 ← 0
13:     for p in 1:N do
14:       if j = 1 then
15:         X_{i,1}^p ← β/η + (X_{κ,3}^p + ϕ(Y_κ − X_{κ,2}^p)² − β/η) e^{−η(t_{i−1} − X_{κ,4}^p)}
16:       γ_1 ← γ_1 + w_κ^p φ((X_{i,1}^j − X_{i,1}^p)/h)
17:       γ_2 ← γ_2 + φ((X_{i,1}^j − X_{i,1}^p)/h)
18:     α_j ← γ_1/γ_2
19:   else
20:     X_{i,1}^j ← β/η + (X_{κ,3}^j + ϕ(Y_κ − X_{κ,2}^j)² − β/η) e^{−η(t_{i−1} − X_{κ,4}^j)}
21: if κ > 0 then
22:   Normalise α_j ← α_j / ∑_{j=1}^N α_j
23:   Sort {X_{i,1}^j, α_j}_{j=1,…,N} ascending in X_{i,1}^j
24:   π_0 ← α_1/2
25:   π_N ← α_N/2
26:   Simulate Ũ_1 < ⋯ < Ũ_N as i.i.d. standard uniforms sorted in ascending order
27:   s ← 1
28:   t ← π_0
29:   for j in 1:N do
30:     if Ũ_j < π_0 then
31:       X̃_{i,1}^j ← X_{i,1}^1
32:     else if Ũ_j > 1 − π_N then
33:       X̃_{i,1}^j ← X_{i,1}^N
34:     else
35:       while Ũ_j > t do
36:         π_s ← (α_s + α_{s+1})/2
37:         t ← t + π_s
38:         s ← s + 1
39:       X̃_{i,1}^j ← X_{i,1}^{s−1} + ((Ũ_j − (t − π_{s−1}))/π_{s−1})(X_{i,1}^s − X_{i,1}^{s−1})
40:   Set X_{i,1}^{1:N} ← X̃_{i,1}^{1:N}
41: for j in 1:N do
42:   Lines 16–34 from Algorithm 1

6 Empirical Illustration

In this section, we use the SMC and QML procedures to model the intra-week volatility of the AUD/USD exchange rate for fifty consecutive trading weeks of 2015.
The Forex market is international: a trading week commences Sunday 22:00 GMT, at the weekly opening of the financial center in Sydney, Australia, and runs continuously until Friday 22:00 GMT, at the weekly close of the financial center in New York. One-minute price data were obtained from www.histdata.com. Due to the presence of missing observations, the data are not exactly equally spaced; however, a large percentage (99.6%) of observations are spaced one minute apart, with the remainder spaced from 2 to 41 minutes apart. For each of the fifty consecutive weeks a separate COGARCH model will be fitted. The first trading week runs from the 4th of January 2015 22:00 GMT to the 9th of January 22:00 GMT, and the last trading week runs from the 13th of December 2015 22:00 GMT to the 18th of December 22:00 GMT. Previous empirical studies on foreign exchange rates have documented intra-week volatility patterns (cf. Muller et al. (1990)). The COGARCH model cannot meaningfully be applied to nonstationary data; as such, to account for intra-week hourly volatility patterns, a transformation from the physical time scale to a virtual business time scale is applied. This leads to further irregular spacing in the data, as trading hours with higher than normal volatility are expanded while trading hours with lower than normal volatility are compressed. This acts to balance out the volatility per unit of virtual time. Details of the data preprocessing steps are provided in Appendix A. The preprocessed datasets are available at Journal of Financial Econometrics online.

6.1 Results

For each of the fifty trading weeks (see Supplementary material), a separate COGARCH model, driven by a compound Poisson process with mean zero Laplace distributed jumps, is fitted.
Due to the presence of several significant isolated return spikes that are not accompanied by a subsequent increase in volatility over the next few observations, a Laplace jump size distribution, which has heavier tails than the normal distribution, seemed appropriate. Recall that for parameter identifiability we assume E(L_1²) = 1. In this way, the variance of the jump distribution is set as the inverse of the jump rate λ. Thus, a total of four parameters β, η, ϕ, and λ are estimated for each trading week. The jump rate λ was estimated via (22). The estimated λ specifies the parameters of the driving compound Poisson process in the SMC method; since E(L_1²) = 1 is fixed, λ is not utilized by the QML method. For the SMC procedure, N = 1000 particles were employed, a choice that seemed reasonable in light of the simulation evidence in Section 7. The likelihood maximization was performed using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm as implemented in the optim function of the stats package in R. The parameter estimates obtained using SMC and QML for the 50 trading weeks are displayed in Figure 5. Unfortunately, in the current literature, diagnostics to determine which set of parameter values provides the better fit to the data have not been substantially developed. However, one could compare summary statistics such as the mean stationary volatility E(σ∞²) := β/(η−ϕ) (see Proposition 4.2 of Kluppelberg, Lindner, and Maller (2004)) as well as the lower bound of the stationary volatility process σ_min² := β/η (see Proposition 2 of Kluppelberg, Lindner, and Maller (2006)).

Figure 5. Estimated COGARCH parameters for the AUD/USD exchange rate for the 50 consecutive trading weeks. The SMC parameter estimates are given by the solid line with the dotted lines giving the 95% confidence intervals. The QML parameter estimates are given by the dashed line.
An estimate of E(σ∞²) can also be obtained empirically by Σ_{i=1}^{n} Y_i² / Σ_{i=1}^{n} Δt_i (see Proposition 5.1 of Kluppelberg, Lindner, and Maller (2004)). This, along with the estimated mean stationary volatilities given by the QML and SMC, is displayed in Figure 6a. The SMC estimates of E(σ∞²) are in strong agreement with the empirical estimates; in general so too are the QML estimates, although they sometimes diverge to higher values, perhaps artificially so due to heavy-tailedness in the data. The QML and SMC estimates of σ_min² are displayed in Figure 6b. The two quantities track each other well; however, the QML estimates are higher almost all of the time.

Figure 6. (a) QML (dashed), SMC (solid), and empirically estimated (dotted) mean stationary squared volatility E(σ∞²) for the fifty real data sets. (b) QML (dashed) and SMC (solid) estimated σ_min² for the fifty real data sets.

We turn to simulation studies in Section 7 to examine the performance of each estimator. There it is found that the SMC method provides reliable estimates for all model parameters, in particular far superior estimates of η and ϕ compared with the QML, for which large biases in these parameters are clearly present.
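The model-implied summaries above, together with the empirical estimate of E(σ∞²), are simple to compute from a fitted parameter vector. A minimal Python sketch (the function names are illustrative, not from the article's code):

```python
import numpy as np

def stationary_summaries(beta, eta, phi):
    """Model-implied mean stationary volatility E(sigma_inf^2) = beta/(eta - phi)
    and its lower bound sigma_min^2 = beta/eta (requires eta > phi > 0)."""
    return {"E_sigma2": beta / (eta - phi), "sigma2_min": beta / eta}

def empirical_mean_volatility(returns, dts):
    """Empirical counterpart of E(sigma_inf^2): the sum of squared returns
    divided by the total observation time."""
    returns, dts = np.asarray(returns), np.asarray(dts)
    return np.sum(returns ** 2) / np.sum(dts)
```

Comparing `stationary_summaries` evaluated at the QML and SMC estimates against `empirical_mean_volatility` of the observed returns reproduces the kind of diagnostic plotted in Figure 6.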
6.2 SE Comparison

To examine the difference between the SEs obtained using the numerical Hessian of the SMC log-likelihood function and those obtained using the parametric bootstrap with 200 simulations, Figure 7 compares these quantities for the fifty real data sets considered. The two SE estimates are in general very similar across the fifty data sets, for all parameters. The average ratio of the numerical Hessian SE estimate over the bootstrap SE estimate across the fifty real data sets was found to be 0.98, 0.97, and 0.87, respectively, for β, η, and ϕ. Thus, in this instance, there is a small underestimation in the SEs for β and η, with a slightly larger underestimation in those for ϕ, when the numerical Hessian SEs are used.

Figure 7. Comparison of the SEs obtained using the numerical Hessian of the SMC log-likelihood function (solid) against those obtained using the parametric bootstrap with 200 simulations (dashed) for the fifty real data sets.

7 Simulation Studies

In this section, we perform a total of three simulation studies. The first employs Laplace distributed jumps with parameter values and observation spacings inspired by the real data analysis in Section 6, while the last two are based on experiments in Maller, Muller, and Szimayer (2008) that use normally distributed jumps.

7.1 Parameter Set Inspired by the Real Data Analysis

We simulate 1000 data sets of a COGARCH with β = 2.8×10⁻⁹, η = 0.2, and ϕ = 0.15, taking the driving compound Poisson process to have arrival intensity λ = 3.2 with mean zero Laplace distributed jumps such that E(L_1²) = 1. The choice of parameters was based on the average of those obtained, using the SMC method, from the fifty real data sets in Section 6.
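A compound-Poisson-driven COGARCH(1,1) path of this kind can be simulated exactly: between jumps the variance decays deterministically toward β/η, while each jump ΔG in the observed process adds ϕ(ΔG)² to the variance (the same recursion used in Algorithm 2). A hedged Python sketch under these assumptions (an illustrative helper, not the article's code):

```python
import numpy as np

def simulate_cogarch_cp(T, beta, eta, phi, lam, rng):
    """Simulate a compound-Poisson-driven COGARCH(1,1) path on [0, T].
    Between jumps sigma^2 decays exponentially toward beta/eta; each jump
    dG of the log-price adds phi * dG^2 to sigma^2. Jumps are mean-zero
    Laplace with E[L_1^2] = 1, i.e. jump variance 1/lam."""
    sigma2 = beta / (eta - phi)        # start at the stationary mean
    b = 1.0 / np.sqrt(2.0 * lam)       # Laplace scale so Var(Z) = 1/lam
    G, t = 0.0, 0.0
    times, path = [0.0], [0.0]
    while True:
        t += rng.exponential(1.0 / lam)          # next jump time
        if t > T:
            break
        dt = t - times[-1]
        sigma2 = beta / eta + (sigma2 - beta / eta) * np.exp(-eta * dt)
        dG = np.sqrt(sigma2) * rng.laplace(0.0, b)
        G += dG
        sigma2 += phi * dG ** 2                  # volatility feedback
        times.append(t)
        path.append(G)
    return np.array(times), np.array(path)
```

Sampling this path at an irregular observation grid and differencing gives simulated returns Y_i of the kind used in this study.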
Each simulated COGARCH process is observed at exactly those times at which observations were made (on the adjusted time scale) during the first trading week, the 4th of January 2015 22:00 GMT to the 9th of January 22:00 GMT. Thus each data set has n = 7183 irregularly spaced observations with T = 7199.5. The parameter λ was estimated using (22). The estimated λ's had a mean of 3.2, a standard deviation of 0.056, and a mean estimated SE of 0.057, with the estimated 95% confidence intervals yielding a 95.2% empirical coverage probability (ECP), that is, the proportion of estimated 95% confidence intervals (assuming normality) that contained the true λ. The SMC procedure was run with 500, 1000, and 2500 particles, allowing us to study the impact of the choice of the number of particles. As can be seen in Table 1, as the number of particles increases the average bias and standard deviation both decrease, for all parameters, giving an overall reduction in root mean square error (RMSE).

Table 1. Summary of the simulation study of Section 7.1

β×10⁷      QML       SMC 500   SMC 1000   SMC 2500
Mean       0.0298    0.0336    0.0315     0.0300
Bias       0.0018    0.0056    0.0035     0.0020
SD         0.0062    0.0078    0.0066     0.0058
RMSE       0.0064    0.0096    0.0075     0.0061
Mean SE    NA        0.0056    0.0052     0.0051
95% ECP    NA        76.97%    87.93%     92.10%

η          QML       SMC 500   SMC 1000   SMC 2500
Mean       0.1565    0.2241    0.2152     0.2078
Bias       −0.0435   0.0241    0.0152     0.0078
SD         0.0250    0.0334    0.0292     0.0264
RMSE       0.0502    0.0412    0.0329     0.0275
Mean SE    NA        0.0250    0.0236     0.0233
95% ECP    NA        77.02%    86.92%     91.30%

ϕ          QML       SMC 500   SMC 1000   SMC 2500
Mean       0.1069    0.1684    0.1615     0.1553
Bias       −0.0431   0.0184    0.0115     0.0053
SD         0.0187    0.0247    0.0218     0.0200
RMSE       0.0470    0.0308    0.0247     0.0207
Mean SE    NA        0.0190    0.0179     0.0176
95% ECP    NA        78.99%    87.93%     91.50%

Notes: Mean: the average of the individual estimates; Bias: the Mean minus the true parameter value; SD: the standard deviation of the individual estimates; Mean SE: the average of the SEs obtained from the Hessian of the SMC log-likelihood; RMSE: the root mean squared error of the individual estimates compared to the true parameter value; 95% ECP: proportion of estimated 95% confidence intervals, based on numerical Hessian SEs, which contained the true parameter.
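The summary statistics defined in the notes to Table 1 can be computed in a few lines. An illustrative Python helper (not the article's code), with the 95% intervals formed as estimate ± 1.96 SE under the normality assumption stated above:

```python
import numpy as np

def summarize(estimates, ses, true_value):
    """Summary statistics as defined in the notes to Table 1."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(ses, dtype=float)
    mean = est.mean()
    rmse = np.sqrt(np.mean((est - true_value) ** 2))
    # Normal-theory 95% confidence interval for each replication.
    lo, hi = est - 1.96 * se, est + 1.96 * se
    ecp = np.mean((lo <= true_value) & (true_value <= hi))
    return {"Mean": mean, "Bias": mean - true_value,
            "SD": est.std(ddof=1), "RMSE": rmse,
            "Mean SE": se.mean(), "95% ECP": ecp}
```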
Even with a small number of 500 particles, maximising the SMC log-likelihood produces superior estimates of η and ϕ, in terms of RMSE, than the QML method. The QML appears to estimate β well, being on par in terms of bias, variance, and RMSE with the SMC employing 2500 particles. However, for η and ϕ, the QML is substantially biased, and a large reduction in bias and RMSE is achieved by the SMC with only a modest number of 500 particles. The kernel density plots in Figure 8 clearly exhibit the QML's significant downward bias in estimates of η and ϕ, while the SMC with 2500 particles is seen to have very little bias for all model parameters.

Figure 8. Kernel density plots for the simulation study of Section 7.1: QML (dashed) compared to SMC (solid) with 2500 particles. The percentage biases of the mean of the QML estimates relative to the true values, respectively for β, η, and ϕ, are 6.02%, −27.79%, and −40.28%. The respective percentage biases obtained for the SMC estimates are 6.56%, 3.74%, and 3.43%, substantially lower for η and ϕ.

Regarding SEs for the QML, Maller, Muller, and Szimayer (2008) use the inverse of the numerical Hessian. However, as the QML does not maximize the true likelihood, a sandwich covariance estimator (see Heyde (1997)) is typically required. To date, the appropriate covariance estimator for the QML has not been developed, and thus we do not provide estimated SEs and ECPs for this method.
ECPs for the SMC estimates of β, η, and ϕ, based on SEs calculated by inverting the numerical Hessian of the negative of the SMC log-likelihood function, are provided in Table 1. As can be seen, as the number of particles increases, the ECPs based on these numerical Hessian SEs increase toward 95%, driven by the lower bias and variability obtained with more particles. However, even at 2500 particles, the ECPs have not reached 95%, the gap being 2.9%, 3.7%, and 3.5%, respectively, for β, η, and ϕ. Recall from Section 6.2 that the numerical Hessian SEs, by not accounting for the two-step estimation, tend to underestimate variability; the ECPs could thus be improved using parametric bootstrap SEs. Note that the gap between the attained ECPs and 95% could also be due in part to the normality being asymptotic in the number of observations, not just particles. For this simulation study, the average number of function calls and gradient approximations used in the BFGS implementation, as well as the time required to evaluate a simulated likelihood in C++ using an Intel Xeon 3.06 GHz processor, are displayed in Table 2.

Table 2. Computational speed information, for the simulation study of Section 7.1, based on an Intel Xeon 3.06 GHz processor

                                               SMC 500   SMC 1000   SMC 2500
Average number of function calls               139.18    126.92     102.62
Average number of gradient calls               17.82     17.45      15.46
Average time for one function call (seconds)   4.2       8.5        21.6

Notes: Optimization was performed using the BFGS method implemented in R with likelihood evaluations performed in C++.

7.2 Parameter Sets of Maller, Muller, and Szimayer (2008)

Interestingly, the QML displayed a similar downward bias in η and ϕ in the simulation studies presented in Maller, Muller, and Szimayer (2008). For comparison we repeat those two simulation studies. The first is based on 5000 equally spaced observations of a COGARCH with parameters β = 1, η = 0.06, and ϕ = 0.0425, and the second is based on 2529 irregularly spaced observations of a COGARCH with parameters β = 1.5, η = 0.085, and ϕ = 0.069. Both COGARCH processes are driven by a compound Poisson process with arrival rate λ = 1 and standard normal jumps. The results for the equally spaced and irregularly spaced simulation studies are presented in Tables 3 and 4, respectively. In both cases a large reduction in bias is achieved using the SMC over the QML. Note that the conditional variance formula used for the QML was misspecified in Maller, Muller, and Szimayer (2008) and should actually be

E(Y_i² | σ²_{t_{i−1}}) = (σ²_{t_{i−1}} − β/(η−ϕ)) (1 − e^{−(η−ϕ)Δt_i})/(η−ϕ) + βΔt_i/(η−ϕ), (29)

as confirmed through correspondence with two of the authors. Our QML results have been implemented using the correct formula, which accounts for the variation between the QML results presented in this article and those reported in Maller, Muller, and Szimayer (2008).
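The corrected conditional variance (29), which drives each Gaussian quasi-likelihood contribution, is straightforward to implement. A minimal Python sketch (illustrative helper names, not the article's code):

```python
import numpy as np

def qml_cond_variance(sigma2_prev, dt, beta, eta, phi):
    """Corrected one-step conditional variance of Equation (29)."""
    k = eta - phi
    return ((sigma2_prev - beta / k) * (1.0 - np.exp(-k * dt)) / k
            + beta * dt / k)

def gaussian_ql_term(y, rho2):
    """Gaussian quasi-log-likelihood contribution of one observed return y,
    treated as N(0, rho2) with rho2 from qml_cond_variance."""
    return -0.5 * (np.log(2.0 * np.pi * rho2) + y * y / rho2)
```

Note that when σ²_{t_{i−1}} equals the stationary mean β/(η−ϕ), (29) reduces to βΔt_i/(η−ϕ), i.e. the stationary variance accrued over the interval.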
Table 3. Summary of the equally spaced simulation study of Section 7.2

                 β                    η                    ϕ                    λ
True             1                    0.06                 0.0425               1
           QML       SMC 1000   QML       SMC 1000   QML       SMC 1000
Mean       1.1622    1.1035     0.0535    0.0621     0.0334    0.0428     1.0010
Bias       0.1622    0.1035     −0.0065   0.0021     −0.0091   0.0003     0.0010
SD         0.5242    0.3888     0.0150    0.0122     0.0078    0.0074     0.0187
RMSE       0.5486    0.4023     0.0164    0.0124     0.0120    0.0074     0.0187
Mean SE    NA        0.3319     NA        0.0112     NA        0.0078     0.0186
95% ECP    NA        94.1%      NA        95.3%      NA        94.3%      94.1%

Notes: Refer to Table 1 for the definitions of the summary statistics (Mean, SD, etc.). The SMC estimates are significantly less biased and have a lower variance compared to the QML estimates.
Table 4. Summary of the irregularly spaced simulation study of Section 7.2

                 β                    η                    ϕ                    λ
True             1.5                  0.085                0.069                1
           QML       SMC 1000   QML       SMC 1000   QML       SMC 1000
Mean       1.7048    1.6438     0.0647    0.0852     0.0477    0.0679     1.0013
Bias       0.2048    0.1438     −0.0203   0.0002     −0.0213   −0.0011    0.0013
SD         1.0212    0.7878     0.0286    0.0291     0.0188    0.0228     0.0269
RMSE       1.0416    0.8008     0.0351    0.0291     0.0284    0.0228     0.0269
Mean SE    NA        0.5189     NA        0.0158     NA        0.0125     0.0267
95% ECP    NA        88.3%      NA        87.2%      NA        87.3%      95.2%

Note: Refer to Table 1 for the definitions of the summary statistics (Mean, SD, etc.). The SMC estimates are significantly less biased than the QML estimates. The variances of the QML estimates for η and ϕ are lower than those of the SMC; however, the SMC is still the superior estimator in terms of RMSE due to the significant biases of the QML.

7.3 Numerical Standard Deviation Versus Statistical Standard Deviation

Richard and Zhang (2007) distinguish between the conventional statistical standard deviation (obtained by applying the SMC estimation procedure with the same fixed starting seed on different data sets) and the numerical standard deviation (obtained by applying the SMC estimation procedure with different starting seeds on the same data set). For each of the three simulation studies considered in this article, fifty data sets were selected, and for each data set the SMC procedure was applied fifty times with a different starting seed each time. The set of fifty seeds was kept identical across data sets. The variability due to data set and SMC seed was investigated through an analysis of variance (ANOVA) using data set as the factor.
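The decomposition just described is a one-way ANOVA with data set as the factor: the between-group sum of squares captures variability across data sets, while the residual (within-group) sum of squares captures variability across SMC seeds. An illustrative Python sketch (hypothetical helper, assuming an equal number of seeds per data set):

```python
import numpy as np

def seed_vs_data_ss(est):
    """One-way ANOVA decomposition. est is an (n_datasets, n_seeds) array of
    estimates of one parameter; returns the percentage of the total sum of
    squares attributable to the SMC seed (the within-group component)."""
    grand = est.mean()
    # Between-group SS: n_seeds times the squared deviations of row means.
    between = est.shape[1] * np.sum((est.mean(axis=1) - grand) ** 2)
    total = np.sum((est - grand) ** 2)  # assumed nonzero
    return 100.0 * (total - between) / total
```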
Across the three simulation studies, for all parameters, it was found that variability due to the SMC seed was dominated by variability due to the data set. For the simulation study of Section 7.1 (hereafter Sim Study 1), using 1000 particles, the percentage contribution to the total sum of squares from variability due to the SMC seed was found to be about 20% for β, η, and ϕ. Increasing the number of particles to 2500, these quantities decreased to about 10%, 12%, and 13%, respectively, for β, η, and ϕ. For the equally spaced simulation study of Section 7.2 (hereafter Sim Study 2), using 1000 particles, the percentage contribution to the total sum of squares from variability due to the SMC seed was about 5% for β and η and 6% for ϕ. The corresponding quantities for the irregularly spaced simulation study of Section 7.2 (hereafter Sim Study 3) were 8%, 9%, and 10%, respectively, for β, η, and ϕ. The average numbers of jumps per observation are 3.21, 1, and 1.44, respectively, for Sim Studies 1, 2, and 3. Thus we see, unsurprisingly, that the higher the average number of jumps per observation, the higher the numerical standard deviation component.

8 Discussion

In conclusion, the QML, although simple to implement and fast to compute, has been shown in a number of simulation studies to exhibit a considerable downward bias in the parameters η and ϕ, whereas very little bias is observed in the SMC estimates of all model parameters. The downward bias in the QML estimates of η and ϕ is not isolated to our simulation studies and has also been observed in the simulation studies presented in Maller, Muller, and Szimayer (2008) and Bibbona and Negri (2015). Recall that the QML employs both a first jump approximation, wherein any change in the observed process is assumed to come from one jump at the start of the observation interval, as well as a normal distribution in place of the actual conditional distribution of the observed return.
Which of these two approximations has the bigger impact on bias is yet to be determined. As the QML estimates β and the long-run variance β/(η−ϕ) reasonably well, underestimating η means the minimum variance β/η is overestimated, while the peaks in volatility at each jump time are underestimated when ϕ is underestimated. In totality, the bias in the QML implies that the estimated volatility process in continuous time is flatter than it should be. This could have implications if one is calibrating the COGARCH model for use in option pricing, risk management, or trading strategies. In this article Gaussian and Laplace jump size distributions were utilized; extension to other jump size distributions is straightforward. The ideas presented here could be easily adapted to the compound Poisson-driven exponential COGARCH(1, 1) process of Haug and Czado (2007) and the compound Poisson-driven asymmetric COGARCH(1, 1) process of Behme, Kluppelberg, and Mayr (2014). Observing financial phenomena at higher frequencies increases the presence of zeroes in the data. This supports the use of a compound Poisson driver for the increasingly available high-frequency financial data sets. Furthermore, any other pure jump Lévy process can be approximated by a sequence of compound Poisson processes. Although the QML makes no assumptions about the driving Lévy process, in practice, if one were to model under the assumption of COGARCH dynamics, knowledge of exactly what the underlying Lévy process is would be required to simulate sample paths for the pricing and hedging of exotic options, the performance evaluation of trading strategies, and the forecasts required for risk management purposes.
While a direct comparison with the MCMC approach of Muller (2010) was not conducted, we note that Muller (2010) employs a single step Gibbs sampler. In a related context, the estimation of Markov switching GARCH, the single step Gibbs sampler of Bauwens, Preminger, and Rombouts (2010) was shown to be a slowly converging and computationally demanding sampler, computationally inferior to the SMC sampling mechanism employed in Bauwens, Dufays, and Rombouts (2014). An application of the Pitt and Malik (2011) procedure for simulated likelihood inference in stochastic volatility models can be found in Pitt, Malik, and Doucet (2014).

Figure 9. Estimated Activities for each of the 120 trading hours (black vertical lines). The trading sessions of certain financial markets are indicated by the horizontal lines.

Appendix A: Data Preprocessing and the Virtual Time Scale

Let t denote the number of minutes elapsed since the open of the trading week (Sunday 22:00 GMT), S_i(t) the AUD/USD price in week i at time t, and N_i the number of observations available in week i. We then define the log returns

r^{(i)}(t_j) = log(S_i(t_j)/S_i(t_{j−1})) for j = 2, …, N_i. (30)

To account for intra-week volatility patterns, a transformation from the physical time scale to the virtual time scale of Dacorogna et al. (1993) will be applied. The adjustment begins with the empirical scaling law of Muller et al. (1990), which relates |r̄_Δt|, the mean absolute log return over a time interval, to the size of this interval Δt through

|r̄_Δt| = c Δt^{1/E}. (31)

The mean absolute log return |r̄_Δt| was computed for observations spaced one minute, two minutes, and so on up to ten minutes apart.
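On the log scale the scaling law (31) is linear, log |r̄_Δt| = log c + (1/E) log Δt, so c and E can be recovered by simple least squares. An illustrative Python sketch (hypothetical helper, not the article's code):

```python
import numpy as np

def fit_scaling_law(dts, mean_abs_returns):
    """Fit |r_dt| = c * dt**(1/E) by least squares on the log-log scale.
    Returns (c, E)."""
    slope, intercept = np.polyfit(np.log(dts), np.log(mean_abs_returns), 1)
    return np.exp(intercept), 1.0 / slope
```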
The |r̄_Δt| values for different interval sizes Δt are not totally independent, as the larger intervals are aggregated from the smaller intervals; however, as an approximation, simple least squares regression yields an almost perfect fit with ĉ = 0.0001482751 and Ê = 2.014276. There are 120 trading hours between Sunday 22:00 GMT and Friday 22:00 GMT. Define the Activity of the h-th hour as

â(h) = (μ̂_h / ĉ*)^Ê, (32)

where

μ̂_h = [Σ_{i=1}^{50} Σ_{j=2}^{N_i} |r^{(i)}(t_j) / Δt_j^{1/Ê}| I(⌊t_j/60⌋ = h)] / [Σ_{i=1}^{50} Σ_{j=2}^{N_i} I(⌊t_j/60⌋ = h)] (33)

is the mean time-standardized absolute log return for trading hour h over all 50 trading weeks, ⌊.⌋ is the floor function, and ĉ* is calibrated to satisfy the normalization condition Σ_{h=1}^{120} â(h) = 120. The new virtual observation times are then given by

ϑ_j = ϑ_{j−1} + â(⌊t_j/60⌋)(t_j − t_{j−1}), j = 2, …, N_i, with ϑ_1 = 0.

A physical time trading hour associated with higher volatility results in a higher Activity for that hour. Thus, the new virtual time scale ϑ slows down during physical time trading hours associated with higher volatility and speeds up during physical time trading hours associated with lower volatility. In turn, the volatility per unit of virtual time is more balanced. The estimated Activities are displayed in Figure 9. One would expect three periods of higher than normal volatility each day, corresponding to the opening hours of the Asian, European, and American Forex markets. The results appear in line with this expectation, as three spurts in Activities are observed to begin at 1:00 AM GMT, 7:00 AM GMT, and 1:00 PM GMT each day, corresponding to the start of trade of the aforementioned markets. Physical time trading hour h = 1 would be expected to be relatively quiet, as only the thinner Australian and New Zealand markets are open. We have â(1) = 0.4996, so virtual time will be running approximately twice as fast during physical trading hour h = 1.
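The virtual-time construction above amounts to scaling each inter-observation interval by the Activity of the trading hour it falls in and accumulating. A hedged Python sketch (an illustrative helper; it indexes hours 0 to 119, whereas the appendix numbers them 1 to 120):

```python
import numpy as np

def virtual_times(t, activity):
    """Map physical observation times t (minutes since the week open) to the
    virtual scale: each increment is scaled by the estimated Activity of the
    trading hour it falls in. `activity` is an array indexed 0..119."""
    t = np.asarray(t, dtype=float)
    hour = np.minimum(np.floor(t[1:] / 60.0).astype(int), len(activity) - 1)
    dtheta = activity[hour] * np.diff(t)  # scaled increments
    return np.concatenate(([0.0], np.cumsum(dtheta)))
```

With all Activities equal to 1 the virtual scale coincides with physical time, which provides a simple sanity check.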
On the other hand, trading activity would be expected to pick up during physical time trading hour h = 4 at the open of the Hong Kong market. We have â(4) = 1.1807, so virtual time will be running at approximately 85% of the speed of physical time during trading hour h = 4.

Supplementary Data

Supplementary data are available at Journal of Financial Econometrics online.

Footnotes

* The authors are grateful to Pierre Del Moral and Arnaud Doucet for providing advice on sequential Monte Carlo, as well as Ross Maller and Gernot Müller for useful discussions about the COGARCH. The authors would also like to thank two anonymous referees and an Associate Editor for their constructive comments that led to the improvement of the article. This work was supported by the Australian Federal Government through the Australian Postgraduate Award. This research includes computations using the Linux computational cluster Katana supported by the Faculty of Science, UNSW Australia.

References

Bauwens L., Dufays A., and Rombouts J. 2014. Marginal Likelihood for Markov-Switching and Change-Point GARCH. Journal of Econometrics 178: 508–522.
Bauwens L., Preminger A., and Rombouts J. 2010. Theory and Inference for a Markov Switching GARCH Model. The Econometrics Journal 13: 218–244.
Behme A., Kluppelberg C., and Mayr K. 2014. Asymmetric COGARCH Processes. Journal of Applied Probability 51: 161–173.
Bibbona E. and Negri I. 2015. Higher Moments and Prediction-Based Estimation for the COGARCH(1, 1) Model. Scandinavian Journal of Statistics 42: 891–910.
Dacorogna M., Muller U., Nagler R., Olsen R., and Pictet O. 1993. A Geographical Model for the Daily and Weekly Seasonal Volatility in the Foreign Exchange Market. Journal of International Money and Finance 12: 413–438.
Doucet A., Jacob P., and Rubenthaler S. 2015. Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models. Preprint, https://arxiv.org/pdf/1304.5768.pdf (accessed 2 April 2018).
Efron B. 1982. The Jackknife, the Bootstrap, and Other Resampling Plans. CBMS-NSF Regional Conference Series in Applied Mathematics No. 38.
Gordon N. J., Salmond D. J., and Smith A. F. 1993. A Novel Approach to Non-Linear and Non-Gaussian Bayesian State Estimation. IEE Proceedings F Radar and Signal Processing 140: 107–113.
Gourieroux C. and Monfort A. 1996. Simulation-Based Econometric Methods. Oxford University Press.
Haug S. and Czado C. 2007. An Exponential Continuous Time GARCH Process. Journal of Applied Probability 44: 960–976.
Haug S., Kluppelberg C., Lindner A., and Zapp M. 2007. Method of Moment Estimation in the COGARCH(1, 1) Model. The Econometrics Journal 10: 320–341.
Heyde C. 1997. Quasi-Likelihood and Its Applications. New York: Springer.
Kim M. and Lee S. 2013. On the Maximum Likelihood Estimation for Irregularly Observed Data from COGARCH(1, 1) Models. REVSTAT - Statistical Journal 11: 135–168.
Kluppelberg C., Lindner A., and Maller R. 2004. A Continuous-Time GARCH Process Driven by a Levy Process: Stationarity and Second Order Behaviour. Journal of Applied Probability 41: 601–622.
Kluppelberg C., Lindner A., and Maller R. 2006. "Continuous Time Volatility Modelling: COGARCH versus Ornstein-Uhlenbeck Models." In Kabanov Y., Lipster R., and Stoyanov I. (eds.), The Shiryaev Festschrift: From Stochastic Calculus to Mathematical Finance, 393–419. Berlin: Springer.
Maller R., Muller G., and Szimayer A. 2008. GARCH Modelling in Continuous Time for Irregularly Spaced Time Series Data. Bernoulli 14: 519–542.
Marin J. M., Rodriguez-Bernal M. T., and Romero E. 2015. Data Cloning Estimation of GARCH and COGARCH Models. Journal of Statistical Computation and Simulation 85: 1818–1831.
McFadden D. 1989. A Method of Simulated Moments for Estimation of Discrete Response Models without Numerical Integration. Econometrica 57: 995–1026.
Muller G. 2010. MCMC Estimation of the COGARCH(1, 1) Model. Journal of Financial Econometrics 8: 481–510.
Muller U., Dacorogna M., Olsen R., Pictet O., Schwarz M., and Morgenegg C. 1990. Statistical Study of Foreign Exchange Rates, Empirical Evidence of a Price Change Scaling Law, and Intraday Analysis. Journal of Banking and Finance 14: 1189–1208.
Murphy K. M. and Topel R. H. 1985. Estimation and Inference in Two-Step Econometric Models. Journal of Business and Economic Statistics 3: 370–379.
Pitt M. and Malik S. 2011. Particle Filters for Continuous Likelihood Evaluation and Maximisation. Journal of Econometrics 165: 190–209.
Pitt M., Malik S., and Doucet A. 2014. Simulated Likelihood Inference for Stochastic Volatility Models Using Continuous Particle Filtering. Annals of the Institute of Statistical Mathematics 66: 527–552.
Protter P. 2005. Stochastic Integration and Differential Equations, 2nd edn. New York: Springer.
Richard J. and Zhang W. 2007. Efficient High-Dimensional Importance Sampling. Journal of Econometrics 141: 1385–1411.
For permissions, please email: journals.permissions@oup.com This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)
