Problems of Information Transmission, Vol. 38, No. 4, 2002, pp. 280–295. Translated from Problemy Peredachi Informatsii, No. 4, 2002, pp. 37–55.
Original Russian Text Copyright © 2002 by Handlery, Höst, Johannesson, Zyablov.
INFORMATION THEORY AND CODING THEORY
A Distance Measure Tailored to Tailbiting Codes
M. Handlery, S. Höst, R. Johannesson, and V. V. Zyablov
Received June 19, 2002; in final form, August 29, 2002
Abstract—The error-correcting capability of tailbiting codes generated by convolutional encoders is described. In order to obtain a description beyond what the minimum distance d_min of the tailbiting code implies, the active tailbiting segment distance is introduced. The description of correctable error patterns via active distances leads to an upper bound on the decoding block error probability of tailbiting codes. The necessary length of a tailbiting code so that its minimum distance is equal to the free distance d_free of the convolutional code encoded by the same encoder is easily obtained from the active tailbiting segment distance. This is useful when designing and analyzing concatenated convolutional codes with component codes that are terminated using the tailbiting method. Lower bounds on the active tailbiting segment distance and an upper bound on the ratio between the tailbiting length and memory of the convolutional generator matrix such that d_min < d_free are derived. Furthermore, affine lower bounds on the active tailbiting segment distance suggest that good tailbiting codes are generated by convolutional encoders with large active-distance slopes.
Terminating rate R = b/c convolutional codes into rate R = K/N block codes using the tailbiting method [1, 2] often leads to codes that are as powerful as the best linear block codes. When using the tailbiting termination method, the convolutional encoder state at time t = 0 must be equal to the encoder state at time t = L, where L denotes the tailbiting length. In the sequel, we call a block code obtained by applying the tailbiting technique to a convolutional encoder a tailbiting code. It is often useful to represent tailbiting codes by their tailbiting trellises. Such a tailbiting trellis consists of L identical trellis sections corresponding to a total of K = Lb information symbols and N = Lc code symbols. The number of codewords is M = 2^{Lb}, and the rate is R = K/N = b/c. Extensive lists of good tailbiting codes are given in the literature. For simplicity, we consider binary codes only.
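As a minimal illustration of the tailbiting condition (our sketch, not part of the paper), consider a feedforward rate R = 1/2 encoder with the generator polynomials (7, 5) in octal, i.e., G(D) = [1 + D + D^2, 1 + D^2] with memory m = 2; all function and variable names below are illustrative.

```python
# Minimal sketch (not from the paper): tailbiting encoding with a
# feedforward rate R = 1/2 encoder, generators (7, 5) in octal,
# i.e., G(D) = [1 + D + D^2, 1 + D^2], memory m = 2.

G = (0b111, 0b101)  # tap masks; bit 2 = current input, bit 0 = oldest
M = 2               # encoder memory m

def tailbiting_encode(u):
    """Encode the length-L information block u into a 2L-bit codeword."""
    assert len(u) >= M
    # Tailbiting initialization: preload the shift register with the
    # last m information bits, so that the start state equals the end
    # state (this direct initialization works for feedforward encoders).
    state = (u[-1] << 1) | u[-2]
    start_state = state
    codeword = []
    for bit in u:
        reg = (bit << M) | state              # current input + register
        for g in G:
            codeword.append(bin(reg & g).count("1") & 1)
        state = reg >> 1                      # shift in the current input
    assert state == start_state               # the tailbiting condition
    return codeword
```

For the information block [1, 0, 0, 0], the encoder starts and ends in the all-zero state and produces the weight-5 codeword [1, 1, 1, 0, 1, 1, 0, 0], which traces the free-distance path of the (7, 5) code around the tailbiting trellis.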
Traditionally, the minimum distance is used to estimate the error-correcting capability of a block
code. There is, however, no description of which error patterns with more than ⌊(d_min − 1)/2⌋ errors
can be corrected. It is our goal to give a description of the error-correcting capability of a tailbiting
code which exceeds what the minimum distance argument predicts. In order to do this, we introduce
as a tool the active tailbiting segment distance. The definition of and notation for the active
tailbiting segment distance follow the concept of the earlier introduced active distances, which,
in turn, can be regarded as a variant of the extended distances for unit-memory convolutional
codes [6, 7]. A refined description of error patterns that are correctable by a tailbiting code leads
to a new upper bound on the decoding block error probability.
It is well known that a convolutional code can correct more error events than the free distance
implies. Any number of errors can be corrected if the errors are sufficiently sparse. Active distances
determine the required sparseness of error events so that a given number of errors is guaranteed to
be corrected; they characterize which error patterns can be corrected (see [8, Chapter 3]).
Supported in part by the Royal Swedish Academy of Sciences in cooperation with the Russian Academy
of Sciences and in part by the Swedish Research Council for Engineering Sciences, Grant 98-501.
© 2002 MAIK “Nauka/Interperiodica”