Appl Math Optim 41:1–7 (2000)
© 2000 Springer-Verlag New York Inc.
A Simple Proof of the Theorem Concerning Optimality
in a One-Dimensional Ergodic Control Problem
Department of Mathematics, Faculty of Science,
Toyama University, Toyama 930-8555, Japan
Communicated by M. Nisio
Abstract. We give a simple proof of the theorem concerning optimality in a one-
dimensional ergodic control problem, characterizing the optimal control within the
class of all Markov controls. Our proof is probabilistic and does not require solving
the corresponding Bellman equation, which simplifies the argument.
Key Words. Ergodic control, Markov controls.
AMS Classification. 93E20, 93C40.
We consider the ergodic control problem of minimizing the cost

    J(v) = limsup_{T→∞} (1/T) E[ ∫_0^T f(X_t, v(X_t)) dt ],   v(·) ∈ B(R, Γ),   (1.1)

subject to the one-dimensional stochastic differential equation

    dX_t = b(X_t, v(X_t)) dt + σ(X_t) dW_t,   X_0 = x,   (1.2)

over the class B(R, Γ) of all Γ-valued Borel measurable functions v(·) on R, where Γ is
a compact set in a separable metric space, x ∈ R is a given constant, b(·, ·) is a bounded
and continuous function on R × Γ, σ(·) is a bounded and Borel measurable function on
R, and (W_t) is a one-dimensional standard Brownian motion.
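The long-run average cost (1.1) under a fixed Markov control can be estimated numerically by simulating the controlled diffusion (1.2) with an Euler–Maruyama scheme. The following is a minimal sketch; the particular choices of the control v, drift b, diffusion coefficient σ, and running cost f below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative stand-ins (assumptions, not from the paper): a bang-bang
# Markov control v(x) with values in the compact set Gamma = {-1, +1},
# a bounded continuous drift b(x, u), a bounded measurable diffusion
# coefficient sigma(x), and a running cost f(x, u).
def v(x):
    return 1.0 if x <= 0 else -1.0   # push the state back toward 0

def b(x, u):
    return u

def sigma(x):
    return 1.0

def f(x, u):
    return x * x                     # penalize excursions away from 0

def ergodic_cost_estimate(x0=0.0, T=200.0, dt=1e-3, seed=0):
    """Euler-Maruyama estimate of (1/T) E[ int_0^T f(X_t, v(X_t)) dt ]."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    sqrt_dt = np.sqrt(dt)
    x = x0
    total = 0.0
    for _ in range(n_steps):
        u = v(x)
        total += f(x, u) * dt
        # one Euler-Maruyama step of dX = b dt + sigma dW
        x += b(x, u) * dt + sigma(x) * sqrt_dt * rng.standard_normal()
    return total / T

print(ergodic_cost_estimate())
```

For this particular control the process is mean-reverting, so the time average settles near a finite value; a single trajectory with finite T gives only a noisy estimate of the limsup in (1.1).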
The author was partially supported by a Grant-in-Aid for Encouragement of Young Scientists (No.
09740140) from the Ministry of Education, Science, and Culture of Japan.