Appl Math Optim 41:171–197 (2000)
© 2000 Springer-Verlag New York Inc.
On the Bellman Equation for Inﬁnite Horizon Problems
with Unbounded Cost Functional
F. Da Lio
Dipartimento di Matematica P. e A., Università di Padova,
via Belzoni 7, 35131 Padova, Italy
Abstract. We study a class of inﬁnite horizon control problems for nonlinear
systems, which includes the Linear Quadratic (LQ) problem, using the Dynamic
Programming approach. Sufﬁcient conditions for the regularity of the value function
are given. The value function is compared with sub- and supersolutions of the
Bellman equation and a uniqueness theorem is proved for this equation among
locally Lipschitz functions bounded below. As an application it is shown that an
optimal control for the LQ problem is nearly optimal for a large class of small
unbounded nonlinear and nonquadratic perturbations of the same problem.
Key Words. Viscosity solution, Optimal control, Hamilton–Jacobi–Bellman equation, Infinite horizon, Linear Quadratic problem.
AMS Classiﬁcation. 49J20, 49L25, 49N10.
This is the second of two papers devoted to the study of the Hamilton–Jacobi–Bellman
(HJB) equation for a large class of unbounded optimal control problems for fully nonlinear
systems

ẏ(t) = f(y(t), α(t)),    t ≥ 0,    α(t) ∈ A,
y(0) = x.
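For orientation, the Bellman equation associated with such a system can be sketched in its standard stationary form; the discount rate λ and running cost l below are generic placeholders for the problem data, not the paper's specific formulation, which is stated with its precise assumptions in what follows:

```latex
% Sketch: stationary HJB equation for a discounted infinite horizon problem.
% Here \lambda > 0 is a generic discount rate and l a generic running cost;
% both are placeholders, not the paper's specific data.
\lambda v(x) + \sup_{a \in A} \bigl\{ -f(x,a) \cdot Dv(x) - l(x,a) \bigr\} = 0,
\qquad x \in \mathbb{R}^n .
```

In the viscosity-solution framework used here, the value function of the control problem is expected to solve this equation in the viscosity sense, which is what makes comparison with sub- and supersolutions meaningful.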
Here we are interested in the following optimal control problem: minimize the inﬁnite
horizon cost functional