Reliable Computing (2006) 12: 365–369
Towards Optimal Use of Multi-Precision
Arithmetic: A Remark
Department of Computer Science, University of Texas at El Paso, El Paso, TX 79968, USA,
Institute for Reliable Computing, Hamburg University of Technology, Schwarzenbergstr. 95,
D-21071 Hamburg, Germany, and
Waseda University, Faculty of Science and Engineering, 2-4-12 Okubo, Shinjuku-ku, Tokyo
169-0072, Japan, e-mail: email@example.com
(Received: 16 January 2006; accepted: 6 March 2006)
Abstract. If standard-precision computations do not lead to the desired accuracy, then it is reasonable
to increase precision until we reach this accuracy. What is the optimal way of increasing precision?
One possibility is to choose a constant q > 1, so that if the precision which requires time t did not
lead to success, we select the next precision that requires time q · t. It was shown that among such
strategies, the optimal (worst-case) overhead is attained when q = 2. In this paper, we show that this
“time-doubling” strategy is optimal among all possible strategies, not only among those in which
we always increase the time by a constant factor q > 1.
Formulation of the problem. In multi-precision arithmetic, it is possible to pick
a precision and perform all computations with this precision. If we
use validated computations, then after the corresponding computations, we learn
the accuracy of the results.
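As an illustration (the paper does not prescribe any particular package), here is a minimal sketch using Python's mpmath library: its interval context iv performs validated computations at a chosen precision, and the width of the resulting enclosure is exactly the accuracy information mentioned above.

```python
# A minimal sketch, assuming Python's mpmath package (an illustration only;
# the paper does not prescribe any particular multi-precision library).
from mpmath import iv

iv.dps = 30              # perform all computations with 30 decimal digits
x = iv.mpf(2)            # the input, represented as a (tight) interval
r = iv.sqrt(x)           # validated enclosure of sqrt(2)
print(r)                 # an interval guaranteed to contain sqrt(2)
print(r.delta)           # width of the enclosure = the achieved accuracy
```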
Usually, we want to compute the result of an algorithm with a given accuracy. We
can start with a certain precision. If this precision leads to the desired accuracy of the results,
we are done; if not, we repeat the computations with an increased precision, and so on.
The question is: What is the best approach to increasing precision?
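In code, the family of strategies described in the abstract can be sketched as follows. This is a hypothetical wrapper, not an implementation from the paper: run_with_precision stands for the actual validated algorithm, and time_for_digits for the time-precision dependence discussed in the next paragraph.

```python
# A sketch of the constant-factor strategy from the abstract; q = 2 is the
# time-doubling strategy. run_with_precision and time_for_digits are
# hypothetical stand-ins, not part of the paper.

def solve_to_accuracy(run_with_precision, time_for_digits, eps,
                      start_digits=15, q=2.0):
    """Repeat a validated computation, increasing precision so that the
    predicted computation time grows by the factor q at every step.

    run_with_precision(d) -> (result, err): runs the validated algorithm
        with d digits and returns a result with guaranteed error err.
    time_for_digits(d): the assumed cost model, increasing in d,
        e.g. lambda d: d * d when standard multiplication dominates.
    """
    d = start_digits
    while True:
        result, err = run_with_precision(d)
        if err <= eps:                      # desired accuracy reached
            return result
        target = q * time_for_digits(d)     # next step costs q times more
        while time_for_digits(d) < target:
            d += 1                          # smallest d reaching that time
```

With q = 2, this is exactly the time-doubling strategy whose optimality is the subject of this note.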
A natural approach to solving this problem. We usually have some idea of how
the computation time t depends on the precision: e.g., for addition, the computation
time grows as the number d of digits; for the standard multiplication, the time grows as d².
In view of this known dependence, we can easily transform the precision (number
of digits) into time and vice versa. Therefore, the problem of selecting the precision
d can be reformulated as the problem of selecting the corresponding computation
time t.
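With precisions identified with times in this way, the worst-case overhead of the constant-factor strategy can be estimated by a short standard argument, sketched here; it explains the value q = 2 quoted in the abstract. Suppose the (unknown) time actually needed is $T$, and we successively try times $t_0, q\,t_0, q^2 t_0, \ldots$, stopping at the first $q^n t_0 \ge T$. The total time spent is
$$t_0\,(1 + q + \cdots + q^n) \;=\; t_0\,\frac{q^{n+1}-1}{q-1} \;<\; \frac{q^{n+1}}{q-1}\,t_0.$$
In the worst case, $T$ is only slightly larger than $q^{n-1} t_0$, so the overhead factor approaches
$$\frac{q^{n+1} t_0/(q-1)}{q^{n-1} t_0} \;=\; \frac{q^2}{q-1}.$$
Since $\frac{d}{dq}\,\frac{q^2}{q-1} = \frac{q(q-2)}{(q-1)^2}$, the function $q^2/(q-1)$ attains its minimum over $q > 1$ at $q = 2$, where the overhead equals 4.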