Reliable Computing (2007) 13:489–504
DOI: 10.1007/s11155-007-9044-7 © Springer 2008
Computing the Pessimism of Inclusion
GILLES CHABERT and LUC JAULIN
ENSIETA, 2 rue François Verny, 29806 Brest Cedex 9, France, e-mail: email@example.com
(Received: 24 May 2007; accepted: 15 October 2007)
Abstract. “Computing the pessimism” means bounding the overestimation produced by an inclusion
function. There are two important differences from classical error analysis. First, we do not consider
the image by an inclusion function but the distance between this image and the exact image (in the
set-theoretical sense). Second, the bound is computed over an infinite set of intervals.
To our knowledge, this issue is not covered in the literature and may have potential applications.
We first motivate and define the concept of pessimism. An algorithm is then provided for
computing the pessimism in the univariate case. This algorithm is general-purpose and works with
any inclusion function. Next, we prove that the algorithm converges to the optimal bound under mild
assumptions. Finally, we derive a second algorithm for automatically controlling the pessimism, i.e.,
determining where an inclusion function is accurate.
In this paper, we consider a continuous function f : ℝ → ℝ (the definition domain
is assumed to be ℝ for simplicity). The radius, the middle, the magnitude,
and the mignitude of an interval [x] are denoted by rad[x], mid[x], |[x]|, and ⟨[x]⟩, respectively.
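As a reading aid, the four interval characteristics just recalled can be sketched as follows (a minimal illustration with our own function names, representing an interval [x] by its two bounds):

```python
def rad(lo, hi):
    """Radius of [x] = [lo, hi]: half its width."""
    return (hi - lo) / 2

def mid(lo, hi):
    """Middle (center) of [x]."""
    return (lo + hi) / 2

def mag(lo, hi):
    """Magnitude |[x]|: largest absolute value of a point in [x]."""
    return max(abs(lo), abs(hi))

def mig(lo, hi):
    """Mignitude <[x]>: smallest absolute value of a point in [x]
    (zero as soon as [x] contains 0)."""
    if lo <= 0 <= hi:
        return 0.0
    return min(abs(lo), abs(hi))

# For [x] = [-1, 3]: rad = 2, mid = 1, |[x]| = 3, <[x]> = 0.
print(rad(-1, 3), mid(-1, 3), mag(-1, 3), mig(-1, 3))
```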
One fundamental tool that interval analysis provides is the notion of inclusion
function (see, e.g., , ). An inclusion function F is a mapping from IR (the set
of intervals) to IR such that

∀[x] ∈ IR,   f([x]) ⊆ F([x]),

where f([x]) denotes the set-theoretical image of [x] by f.
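To make the definition concrete, here is a sketch (not taken from the paper) of a natural inclusion function for f(x) = x² − x, built from elementary interval operations. The enclosure F([x]) ⊇ f([x]) always holds, and is usually strict:

```python
def isub(a, b):
    """Interval subtraction [a] - [b]."""
    return (a[0] - b[1], a[1] - b[0])

def isqr(a):
    """Interval square [a]^2 (tight, unlike [a]*[a])."""
    lo, hi = a
    if lo <= 0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    m = min(abs(lo), abs(hi))
    return (m * m, max(lo * lo, hi * hi))

def F(x):
    """Natural interval extension of f(x) = x^2 - x."""
    return isub(isqr(x), x)

# Over [0, 1] the exact image is f([0,1]) = [-1/4, 0] (minimum at x = 1/2),
# but the natural extension yields the strictly larger interval:
print(F((0.0, 1.0)))  # -> (-1.0, 1.0)
```

The gap between F([0, 1]) = [−1, 1] and f([0, 1]) = [−1/4, 0] is precisely the kind of overestimation the paper calls pessimism.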
An inclusion function makes it possible to perform a safe evaluation of a function on
a whole set of values, and this addresses the question of reliability: thanks to
inclusion functions, interval methods are robust to round-off errors, imprecision
on input data, numerical truncation, etc. Besides reliability, accuracy, i.e.,
whether the result is tight or not, is also a crucial matter for numerical methods.
Clearly, if the overestimation produced by the underlying inclusion function is
too large, the method will suffer from a lack of accuracy.
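As a toy quantification of this overestimation (our own measure for illustration, not the algorithm developed in the paper), one can compare the width of a natural interval extension of f(x) = x² − x with the width of the exact image:

```python
def f(x):
    return x * x - x

def exact_image(lo, hi):
    """Exact image of f over [lo, hi]: f decreases until x = 1/2, then increases,
    so the extrema lie at the endpoints and possibly at x = 1/2."""
    vals = [f(lo), f(hi)]
    if lo <= 0.5 <= hi:
        vals.append(f(0.5))  # global minimum -1/4
    return (min(vals), max(vals))

def F(lo, hi):
    """Natural extension of x^2 - x; x occurs twice, hence overestimation."""
    if lo <= 0 <= hi:
        sq = (0.0, max(lo * lo, hi * hi))
    else:
        sq = (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
    return (sq[0] - hi, sq[1] - lo)

def excess_radius(lo, hi):
    """Half the difference between the widths of F([x]) and f([x])."""
    Flo, Fhi = F(lo, hi)
    elo, ehi = exact_image(lo, hi)
    return ((Fhi - Flo) - (ehi - elo)) / 2

print(excess_radius(0.0, 1.0))  # -> 0.875  (large pessimism on a wide box)
print(excess_radius(0.4, 0.6))  # much smaller on a narrow interval
```

The overestimation shrinks with the interval width, which is why bounding it over a whole (infinite) family of intervals, as this paper does, requires more than a pointwise check.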
The accuracy issue has led people to design smart inclusion functions using,
e.g., multi-precision arithmetic , Taylor models , or Bernstein expansions for
polynomials . See also  for a survey on inclusion functions.