Appl Math Optim (2007) 56: 303–311
Robust Dynamics and Control of a Partially Observed Markov Chain
R.J. Elliott · W.P. Malcolm · J.P. Moore
Published online: 28 August 2007
© Springer Science+Business Media, LLC 2007
Abstract In a seminal paper, Martin Clark (Communications Systems and Random
Process Theory, Darlington, 1977, pp. 721–734, 1978) showed how the filtered dy-
namics giving the optimal estimate of a Markov chain observed in Gaussian noise
can be expressed using an ordinary differential equation. These results offer substan-
tial benefits in filtering and in control, often simplifying the analysis and in some
settings providing numerical benefits; see, for example, Malcolm et al. (J. Appl. Math.
Stoch. Anal., 2007, to appear).
Clark’s method uses a gauge transformation and, in effect, solves the Wonham–
Zakai equation using variation of constants. In this article, we consider the optimal
control of a partially observed Markov chain. This problem is discussed in Elliott
et al. (Hidden Markov Models: Estimation and Control, Applications of Mathematics
Series, vol. 29, 1995). The innovation in our results is that the robust dynamics of
Clark are used to compute forward-in-time dynamics for a simplified adjoint process.
A stochastic minimum principle is established.
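To make the gauge-transformation idea mentioned above concrete, the following is a minimal numerical sketch (our illustration, not the authors' code) of Clark's robust filter for a two-state Markov chain observed in Gaussian noise. The generator, observation levels, and time horizon below are arbitrary assumptions chosen for the example. Starting from the Zakai equation in Itô form, dq = Aᵀq dt + Dq dy with D = diag(h), the gauge transform q̄ₜ = exp(−D yₜ) qₜ removes the stochastic integral and leaves an ordinary differential equation driven pathwise by the observation yₜ:

```python
import numpy as np

# Robust (pathwise) filter for a two-state Markov chain observed in
# Gaussian noise, after Clark's gauge transformation.
# Zakai equation (Ito form):   dq = A^T q dt + D q dy,   D = diag(h).
# With qbar = exp(-D y_t) q, Ito's rule gives the ordinary ODE
#   d(qbar)/dt = exp(-D y_t) (A^T - D^2 / 2) exp(D y_t) qbar,
# which can be integrated with any ODE scheme (Euler here).

rng = np.random.default_rng(0)

A = np.array([[-1.0,  1.0],      # generator of the chain (rows sum to 0)
              [ 2.0, -2.0]])     # -- example values, chosen arbitrarily
h = np.array([0.0, 1.0])          # observation levels h(e_i)
D = np.diag(h)

T, n = 2.0, 20000
dt = T / n

x = 0                             # current state of the simulated chain
y = 0.0                           # observation path y_t = int h(X) dt + W_t
qbar = np.array([0.5, 0.5])       # gauge-transformed unnormalised density

for _ in range(n):
    # simulate one Euler step of the chain and the observation
    if rng.random() < -A[x, x] * dt:
        x = 1 - x
    dy = h[x] * dt + np.sqrt(dt) * rng.standard_normal()
    y += dy
    # Euler step of the robust ODE: no stochastic integral appears,
    # only the observation value y enters, pathwise.
    E = np.diag(np.exp(h * y))
    Einv = np.diag(np.exp(-h * y))
    qbar = qbar + dt * (Einv @ (A.T - 0.5 * D @ D) @ E @ qbar)

# Undo the gauge and normalise to recover the conditional distribution.
q = np.exp(h * y) * qbar
p = q / q.sum()
print(p)
```

Because the transformed dynamics form an ODE that depends on the observation path only through its current value, the filter is continuous in y (hence "robust") and can be integrated with standard deterministic solvers.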
Keywords Reference probability · Jump Markov systems · Hybrid dynamics ·
Viterbi algorithm · Filtering · Smoothing
R.J. Elliott
Haskayne School of Business, Scurﬁeld Hall, University of Calgary, 2500 University Drive NW,
Calgary, AB, Canada T2N 1N4
W.P. Malcolm · J.P. Moore
National ICT Australia, Locked Bag 8001, Canberra ACT 2601, Australia