Appl Math Optim (2009) 60: 297–339
On Finite-Difference Approximations for Normalized Bellman Equations
István Gyöngy · David Šiška
Published online: 9 July 2009
© Springer Science+Business Media, LLC 2009
Abstract A class of stochastic optimal control problems involving optimal stopping
is considered. Methods of Krylov (Appl. Math. Optim. 52(3):365–399, 2005) are adapted to investigate the numerical solutions of the corresponding normalized Bellman equations and to estimate the rate of convergence of finite-difference approximations for the optimal reward functions.
Keywords Finite-difference approximations · Normalized Bellman equations ·
Fully nonlinear equations · Optimal stopping and control
Stochastic optimal control and optimal stopping problems have many applications in mathematical finance, portfolio optimization, economics and statistics (sequential analysis). Optimal stopping problems can in some cases be solved analytically. For most problems, however, one must resort to numerical approximations of the solutions.
One approach is to use controlled Markov chains as approximations to controlled
diffusion processes, see e.g. . A thorough account of this approach is available
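To illustrate the flavour of such numerical approximations, the following is a minimal sketch (not from the paper; the reward function, discount rate and grid size are illustrative choices) of value iteration for a discretized one-dimensional optimal stopping problem: maximize E[exp(-rτ)g(X_τ)] for a Brownian motion X on [-1, 1], which leads to the discrete obstacle (Bellman) equation v_i = max(g_i, (v_{i-1} + v_{i+1}) / (2(1 + r h²))).

```python
# Sketch: finite-difference value iteration for the obstacle problem
#     max( v''/2 - r v, g - v ) = 0  on (-1, 1),  v = g on the boundary,
# arising from optimal stopping of Brownian motion with discount rate r.
# All concrete choices (g, r, grid) are illustrative assumptions.

def solve_optimal_stopping(g, h, r, tol=1e-8, max_iter=50_000):
    """Gauss-Seidel value iteration for the discretized Bellman equation
    v_i = max(g_i, (v_{i-1} + v_{i+1}) / (2 (1 + r h^2)))."""
    v = list(g)  # start from the obstacle; iterates stay >= g
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, len(v) - 1):
            cont = (v[i - 1] + v[i + 1]) / (2.0 * (1.0 + r * h * h))
            new = max(g[i], cont)        # stop (take g) or continue
            diff = max(diff, abs(new - v[i]))
            v[i] = new                   # in-place (Gauss-Seidel) update
        if diff < tol:
            break
    return v

if __name__ == "__main__":
    n, r = 100, 0.5
    h = 2.0 / n
    xs = [-1.0 + i * h for i in range(n + 1)]
    g = [max(x, 0.0) for x in xs]        # illustrative reward function
    v = solve_optimal_stopping(g, h, r)
    # the value function dominates the reward but cannot exceed its maximum
    print(min(vi - gi for vi, gi in zip(v, g)) >= 0.0)
    print(max(v) <= max(g))
```

The update rule is the discrete analogue of the dynamic programming principle: at each grid point the controller compares the immediate reward g_i with the discounted expected value of continuing one step.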
We are interested in the rate of convergence of finite difference approximations to
the payoff function of optimal stopping and control problems. Using the method of
School of Mathematics and Maxwell Institute, University of Edinburgh, King’s Buildings,
Edinburgh, EH9 3JZ, UK
D. Šiška
FIRST FRG, BNP Paribas, 10 Harewood Avenue, London, NW1 6AA, UK