PROBLEMS OF INFORMATION TRANSMISSION
A translation of Problemy Peredachi Informatsii


Volume 26, Number 4, October–December, 1990

CONTENTS

 

Symbol Error Probability for Convolutional Codes
M. V. Burnashev and D. L. Cohn
pp. 289–298

Abstract—A new approach is proposed to the study of the statistical characteristics of the Viterbi decoder. We derive an exact formula that expresses the bit error probability of a given convolutional code in certain new terms. In particular, this formula makes it possible to analyze the asymptotic behavior of the error probability at low noise levels.
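
A quick way to get a feel for such bit error probabilities is Monte Carlo simulation. The sketch below is not the authors' method; it assumes the standard rate-1/2, constraint-length-3 code with octal generators 7 and 5, a binary symmetric channel with crossover probability $p$, and hard-decision Viterbi decoding.

import random

# Rate-1/2 convolutional code, constraint length 3, generators (7, 5) octal.
G = (0b111, 0b101)
N_STATES = 4  # 2^(K-1) states for K = 3

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state   # register = (current bit, two previous bits)
        out.extend(parity(reg & g) for g in G)
        state = reg >> 1
    return out

def viterbi(received):
    # Hard-decision Viterbi decoding over the 4-state trellis,
    # with survivor traceback.
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)
    back = []
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * N_STATES
        choice = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                d = sum(parity(reg & g) != x for g, x in zip(G, r))
                if metric[s] + d < new_metric[ns]:
                    new_metric[ns] = metric[s] + d
                    choice[ns] = (s, b)
        metric = new_metric
        back.append(choice)
    state, bits = metric.index(min(metric)), []
    for choice in reversed(back):
        state, b = choice[state]
        bits.append(b)
    return bits[::-1]

def bit_error_rate(p, n_bits=20000, seed=1):
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    coded = encode(bits + [0, 0])        # two tail bits terminate the trellis
    noisy = [c ^ (rng.random() < p) for c in coded]
    decoded = viterbi(noisy)[:n_bits]
    return sum(a != b for a, b in zip(bits, decoded)) / n_bits

for p in (0.02, 0.01, 0.005):            # low noise: error rate falls off fast
    print(p, bit_error_rate(p))

At low noise the estimate decays roughly as $p^{\lceil d_{\mathrm{free}}/2\rceil}$ (here $d_{\mathrm{free}}=5$), which is the kind of asymptotics an exact formula makes precise.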

 

$E$-Capacity of an Arbitrarily Varying Channel with an Informed Coder
M. E. Haroutunian
pp. 299–305

Abstract—Spherical packing and random coding bounds are constructed for the $E$-capacity of an arbitrarily varying channel with average and maximum error probability for the case when the sequence of channel states is known during coding and unknown during decoding [R. Ahlswede, IEEE Trans. Inf. Theory, 32, No. 5, 621–629 (1986)].
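
In the notation assumed here (following the general definition of $E$-capacity as a rate-reliability function), let $M(N, E)$ denote the maximal size of a length-$N$ code whose error probability (average or maximal, giving in general two versions of the function) does not exceed $e^{-NE}$; then

$$C(E) \;=\; \liminf_{N \to \infty} \frac{1}{N} \log M(N, E), \qquad E > 0,$$

so that bounds on $C(E)$ refine the usual capacity results, which correspond to the limit of small $E$.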

 

Fast Adaptive Coding Algorithm
B. Ya. Ryabko
pp. 305–317

Abstract—Let $x_1, x_2, \dots, x_N$, $N \ge 1$, be a word in a finite alphabet $A$. We consider sequential coding methods, i.e., methods that encode the letters $x_1, x_2, \dots$ by words in a binary alphabet, where the code of the letter $x_i$ depends only on $x_1 \dots x_{i-1}$. (These codes are effective for data compression in computer memory [1–3].) For large alphabets $A$ (when $|A| \to \infty$), the code proposed in [1] has minimum redundancy of order $O(1)$ bits, and the letter coding/decoding time is $O(|A|\log|A|)$. The implementation of the “book stack” method of [5] described in [2, 4] has time $O(\log^2|A|)$ and redundancy $\log\log|A|$ bits. In the present paper, we propose an algorithm which combines the advantages of the codes from [1] and [5, 2]: it has minimum redundancy of $O(1)$ bits and coding/decoding time of $O(\log^2|A|)$, which is close to the obvious lower bound $O(\log|A|)$.
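
The “book stack” method of [5] is known in the Western literature as move-to-front coding. A minimal sketch of the idea (a naive $O(|A|)$-per-letter list implementation, not the fast data structures of [2, 4] or the new algorithm):

def mtf_encode(text, alphabet):
    # "Book stack" / move-to-front: emit each letter's current position
    # in the stack, then move that letter to the front, so recently
    # used letters receive small indices (hence short codewords).
    stack, out = list(alphabet), []
    for ch in text:
        i = stack.index(ch)
        out.append(i)
        stack.insert(0, stack.pop(i))
    return out

def mtf_decode(positions, alphabet):
    stack, out = list(alphabet), []
    for i in positions:
        ch = stack.pop(i)
        out.append(ch)
        stack.insert(0, ch)
    return "".join(out)

alphabet = "abcdefgh"
codes = mtf_encode("aabbbbccca", alphabet)
print(codes)                        # repeated letters get index 0
print(mtf_decode(codes, alphabet))  # recovers "aabbbbccca"

In a complete coder the emitted positions are then written with a prefix code favoring small indices; the redundancy and time bounds quoted above concern precisely this position-coding step and the data structure maintaining the stack.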

 

Construction of Linear Covering Codes
A. A. Davydov
pp. 317–331

Abstract—A construction of linear covering codes is proposed which, given any binary code with covering radius $R$, yields an infinite family of binary codes with the same covering radius. Infinite families of linear binary covering codes with covering radius $R\ge 2$ are constructed whose parameters are better than those of previously known codes.
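
For readers unfamiliar with the parameter: the covering radius of a code is the largest Hamming distance from any word of the ambient space to the nearest codeword. A brute-force check for small parameters (illustration only, unrelated to the construction of the paper):

from itertools import product

def covering_radius(code, n):
    # Largest Hamming distance from a word of {0,1}^n to the code.
    return max(
        min(sum(a != b for a, b in zip(word, c)) for c in code)
        for word in product((0, 1), repeat=n)
    )

# The [3, 1] binary repetition code covers {0,1}^3 with radius 1.
print(covering_radius([(0, 0, 0), (1, 1, 1)], 3))  # -> 1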

 

Pipeline Decoding of Embedded Trellis Codes: Error-Tolerance Analysis
F. A. Taubin and A. N. Trofimov
pp. 332–343

Abstract—We consider a decoding procedure based on alternate decoding of embedded trellis codes (pipeline decoding). Upper bounds are derived on the decoding error probability of a specific trellis code with embedded structure over a Gaussian channel. The pipeline algorithm is shown to be asymptotically optimal (as $E_s/N_0\to\infty$). In addition to the main error probability bound, we obtain a so-called reduced (simplified) bound, which asymptotically coincides with the main bound.

 

Carrier Phase Estimators in Phase-Shift Keying Modulation
V. A. Lazarev and R. Z. Khasminskii
pp. 344–353

Abstract—Various recursive methods are proposed for estimating the carrier phase $\varphi_0$ in binary phase-shift keying modulation. Asymptotically efficient procedures are constructed for both known and unknown signal-to-noise ratios. Simplified procedures are also proposed whose asymptotic efficiency is only slightly inferior to that of the optimal procedures.
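
As one concrete example of a recursive procedure of this type (an illustrative decision-directed stochastic-approximation scheme under an assumed discrete-time model $r_k = a_k e^{i\varphi_0} + n_k$, $a_k = \pm 1$, not the estimators constructed in the paper):

import cmath, math, random

def estimate_phase(n=50000, phi0=0.7, snr=4.0, seed=1):
    # Recursive decision-directed estimate of the carrier phase phi0
    # from BPSK samples r_k = a_k * exp(i*phi0) + complex Gaussian noise
    # (assumed model; snr here is signal power over total noise power).
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(2.0 * snr)  # per-component noise deviation
    phi = 0.0
    for k in range(1, n + 1):
        a = rng.choice((-1.0, 1.0))
        r = a * cmath.exp(1j * phi0) + complex(rng.gauss(0, sigma),
                                               rng.gauss(0, sigma))
        z = r * cmath.exp(-1j * phi)
        # A hard decision sign(Re z) strips the +-1 modulation; the
        # residual quadrature component drives a Robbins-Monro step 1/k.
        phi += math.copysign(z.imag, z.real) / k
    return phi

print(estimate_phase())  # close to phi0 = 0.7 (up to the inherent
                         # pi-ambiguity of biphase modulation)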

 

Random Multiple Access in a Binary Feedback Channel
B. S. Tsybakov and A. N. Beloyarov
pp. 354–366

Abstract—We consider algorithms for random multiple access of packets to a channel with “empty–not empty” (E–NE) and “success–no success” (S–NS) binary feedback. Upper bounds on the mean packet delay are derived for these algorithms.
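
For intuition about the feedback models, here is a toy baseline (slotted ALOHA with a fixed retransmission probability under “success–no success” feedback; assumed parameters, not the algorithms analyzed in the paper) that estimates the mean packet delay by simulation:

import random

def mean_delay(lam=0.1, retry_p=0.1, n_slots=100000, seed=1):
    # Slotted channel, at most one Bernoulli(lam) arrival per slot.
    # A packet transmits in its arrival slot and then, after each
    # unsuccessful slot, with probability retry_p; S-NS feedback only
    # tells the users whether a slot carried exactly one transmission.
    rng = random.Random(seed)
    system = []     # arrival slots of packets not yet delivered
    delays = []
    for t in range(n_slots):
        if rng.random() < lam:
            system.append(t)
        tx = [i for i, arrived in enumerate(system)
              if arrived == t or rng.random() < retry_p]
        if len(tx) == 1:                       # "S": exactly one sender
            delays.append(t - system[tx[0]] + 1)
            system.pop(tx[0])
    return sum(delays) / len(delays)

print(mean_delay())  # mean delay in slots; it grows sharply as lam
                     # approaches the throughput of the access algorithm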

 

Bayesian Learning for Pattern Classification in the Class of Density Functions Representable as the Square of the Sum of an Orthogonal Series
A. M. Chikin
pp. 367–373

Abstract—We solve the Bayesian learning problem for pattern classification in a wide class of smooth densities. A simple lower bound on the risk is obtained, which characterizes the solvability in principle of the classification problem from a learning sample of a given (preferably small) size.
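
The density class named in the title consists of functions of the form (notation assumed; $\{\phi_k\}$ is an orthonormal system)

$$f(x) \;=\; \Bigl( \sum_{k \ge 1} c_k\, \phi_k(x) \Bigr)^{2}, \qquad \int \phi_j(x)\, \phi_k(x)\, dx = \delta_{jk}, \qquad \sum_{k \ge 1} c_k^{2} = 1,$$

where squaring guarantees $f \ge 0$, and orthonormality gives $\int f(x)\,dx = \sum_k c_k^2 = 1$.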

 

INDEX
pp. 375–381