Reliable Computing 8: 97–113, 2002.
© 2002 Kluwer Academic Publishers. Printed in the Netherlands.
Interval-Valued Finite Markov Chains
IGOR O. KOZINE
Systems Analysis Department, Risø National Laboratory, P.O. Box 49, 4000 Roskilde, Denmark,
LEV V. UTKIN
Department of Computer Science, Forest Technical Academy, Institutski per. 5, St.Petersburg,
194021, Russia, e-mail: email@example.com
(Received: 24 November 2000; accepted: 6 July 2001)
Abstract. The requirement that precise state and transition probabilities be available is often not
realistic because of cost, technical difficulties or the uniqueness of the situation under study. Expert
judgements, generic data, and heterogeneous and partial information on the occurrences of events may be
the sources of the probability assessments. None of this source information can yield precise probabilities
of interest without introducing drastic assumptions, often of quite an arbitrary nature. In
this paper the theory of interval-valued coherent previsions is employed to generalise discrete Markov
chains to interval-valued probabilities. A general procedure of interval-valued probability elicitation
is analysed as well. In addition, examples are provided.
1. Introduction

The importance of Markov chain theory to many sciences and applications has been
widely recognised. This theory is employed in the social and biological sciences,
in reliability theory to assess the probabilities of states of repairable systems, by
decision-makers in different areas, etc. However, there are some obstacles in apply-
ing Markov chains in practical situations. The major problem is that a large number
of precise probability estimates is necessary to fully specify a model. The require-
ment that precise state and transition probabilities be available is often not realistic.
In practice there are often too few observed occurrences to produce confident precise
estimates. In addition, the evidence about the likelihood of the occurrence of an
event may be subjective, or the results of comparative judgements, or a conclusion
drawn from generic data. Moreover, some information provides evidence about the
moments of a random variable, while other information gives evidence about mean
values such as average losses or gains. All such evidence is chance-related and partial.
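To make the requirement concrete, consider a minimal sketch of a classical (precise) finite Markov chain for a hypothetical two-state repairable system; the states, transition probabilities, and initial distribution below are illustrative assumptions, not values from this paper:

```python
import numpy as np

# Hypothetical repairable system: state 0 = working, state 1 = failed.
# A classical Markov chain demands a precise number for EVERY transition
# probability -- exactly the requirement that is often unrealistic.
P = np.array([[0.95, 0.05],   # working -> working, working -> failed
              [0.60, 0.40]])  # failed  -> working (repair), failed -> failed

p0 = np.array([1.0, 0.0])     # system starts in the working state

# State distribution after n steps: p_n = p_0 * P^n
p3 = p0 @ np.linalg.matrix_power(P, 3)
```

Each of the four entries of `P` must be elicited precisely before the model can be run at all, which is the specification burden discussed above.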
From these considerations, two questions arise: how can all available sources of
information be used to quantify the present state of knowledge, preferably without
introducing new assumptions? And how can the degree of uncertainty be preserved
while propagating this knowledge to the likelihood of future state occurrences?
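The second question can be illustrated with a simple one-step computation. The sketch below is an elementary bounding argument, not the paper's general procedure: each transition probability is only known to lie in an interval, rows must sum to one, and (with a precisely known initial state) the bounds on the next-step state probability follow by taking each row at its extreme feasible value. All numerical intervals are hypothetical:

```python
import numpy as np

# Interval bounds on the transition probabilities of a two-state chain:
# low[i, k] <= P[i, k] <= up[i, k], and each row of P must sum to 1.
low = np.array([[0.90, 0.03],
                [0.50, 0.30]])
up  = np.array([[0.97, 0.10],
                [0.70, 0.50]])

def row_bounds(i, j, low, up):
    """Tightest feasible interval for P[i, j] given the row-sum constraint:
    the remaining entries of row i must absorb 1 - P[i, j]."""
    others_low = low[i].sum() - low[i, j]
    others_up  = up[i].sum() - up[i, j]
    lo = max(low[i, j], 1.0 - others_up)
    hi = min(up[i, j], 1.0 - others_low)
    return lo, hi

p0 = np.array([1.0, 0.0])  # initial state known precisely

# One-step bounds on P(X_1 = j): rows can be chosen independently,
# so each row contributes its extreme feasible value.
j = 1
lo = sum(p0[i] * row_bounds(i, j, low, up)[0] for i in range(2))
hi = sum(p0[i] * row_bounds(i, j, low, up)[1] for i in range(2))
```

Note that this naive step-by-step bounding need not stay tight over many steps; obtaining coherent multi-step bounds is precisely where the theory of interval-valued previsions developed in this paper comes in.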