PROBLEMS OF INFORMATION TRANSMISSION
A translation of Problemy Peredachi Informatsii


Volume 51, Number 2, April–June, 2015

CONTENTS

 

On Multipartite Superactivation of Quantum Channel Capacities
M. E. Shirokov
pp. 87–102

Abstract—We consider a generalization of the notion of superactivation of quantum channel capacities to the case of $n>2$ channels. An explicit example of such superactivation for the 1-shot quantum zero-error capacity is constructed for $n=3$. An interpretation of this example in terms of quantum measurements is given.
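
As context (a schematic paraphrase, not the paper's precise formulation), superactivation of a capacity $Q$ for channels $\Phi_1,\dots,\Phi_n$ means that the channels are useless separately (and, in the multipartite case, in every proper subcollection) but not jointly:

$$ Q(\Phi_i)=0,\quad i=1,\dots,n, \qquad\text{while}\qquad Q(\Phi_1\otimes\cdots\otimes\Phi_n)>0. $$

The paper constructs such a triple for the 1-shot quantum zero-error capacity ($n=3$).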

 

Information-Geometric Equivalence of Transportation Polytopes
M. Kovačević, I. Stanojević, and V. Šenk
pp. 103–109

Abstract—This paper deals with transportation polytopes in the probability simplex (i.e., sets of categorical bivariate probability distributions with prescribed marginals). Information projections between such polytopes are studied, and a sufficient condition is described under which these mappings are homeomorphisms.
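
As an aside not taken from the paper: the information projection (I-projection) of a strictly positive joint distribution $Q$ onto the transportation polytope with prescribed marginals can be computed by classical iterative proportional fitting. A minimal Python sketch (all names and values below are ours, chosen for illustration):

    import numpy as np

    def i_projection(Q, row_marg, col_marg, iters=200):
        # Iterative proportional fitting: alternately rescale rows and columns
        # of Q so that the result has the prescribed marginals; the limit
        # minimizes D(P || Q) over the transportation polytope.
        P = Q.astype(float).copy()
        for _ in range(iters):
            P *= (row_marg / P.sum(axis=1))[:, None]
            P *= (col_marg / P.sum(axis=0))[None, :]
        return P

    # toy example: project the uniform 3x3 joint distribution
    P = i_projection(np.full((3, 3), 1 / 9),
                     np.array([0.5, 0.3, 0.2]),
                     np.array([0.2, 0.2, 0.6]))
    print(P.sum(axis=1), P.sum(axis=0))   # marginals match the prescription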

 

Almost Disjunctive List-Decoding Codes
A. G. D'yachkov, I. V. Vorob'ev, N. A. Polyansky, and V. Yu. Shchukin
pp. 110–131

Abstract—We say that an $s$-subset of codewords of a binary code $X$ is $s_L$-bad in $X$ if there exists an $L$-subset of other codewords in $X$ whose disjunctive sum is covered by the disjunctive sum of the given $s$ codewords. Otherwise, this $s$-subset of codewords is said to be $s_L$-good in $X$. A binary code $X$ is said to be a list-decoding disjunctive code of strength $s$ and list size $L$ (an $s_L$-LD code) if it does not contain $s_L$-bad subsets of codewords. We consider a probabilistic generalization of $s_L$-LD codes; namely, we say that a code $X$ is an almost disjunctive $s_L$-LD code if the fraction of $s_L$-good subsets of codewords in $X$ is close to $1$. Using the random coding method on the ensemble of binary constant-weight codes, we establish lower bounds on the capacity and error exponent of almost disjunctive $s_L$-LD codes. For this ensemble, the obtained lower bounds are tight and show that the capacity of almost disjunctive $s_L$-LD codes is greater than the zero-error capacity of disjunctive $s_L$-LD codes.
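
To make the definition concrete, here is a brute-force check (illustrative only; the function names and the integer-bitmask encoding of codewords are ours) of whether a given $s$-subset of codewords is $s_L$-bad, i.e., whether the disjunctive (bitwise OR) sum of some $L$ other codewords is covered by the OR of the chosen $s$ codewords:

    from itertools import combinations

    def or_sum(words):
        # disjunctive sum = bitwise OR of integer-encoded binary codewords
        acc = 0
        for w in words:
            acc |= w
        return acc

    def is_sL_bad(code, s_subset, L):
        cover = or_sum(s_subset)
        others = [w for w in code if w not in s_subset]
        # the s-subset is bad if the OR of some L other codewords
        # is a componentwise submask of `cover`
        return any((or_sum(cand) | cover) == cover
                   for cand in combinations(others, L))

    # tiny example: codewords of length 4 written as bitmasks
    code = [0b1100, 0b1010, 0b0110, 0b0011, 0b1001]
    print(is_sL_bad(code, [0b1100, 0b1010], L=1))   # True: 0110 is covered by 1110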

 

On Error Correction with Errors in Both the Channel and Syndrome
S. G. Vlăduţ, G. A. Kabatiansky, and V. V. Lomakov
pp. 132–138

Abstract—We address the problem of error correction by linear block codes under the assumption that the syndrome of the received vector is itself computed with errors. We propose a construction of parity-check matrices that make it possible to solve the syndrome equation even when the syndrome is erroneous; in particular, we construct parity-check matrices with minimum redundancy, which are analogs of Reed–Solomon codes for this problem. We also establish analogs of the classical coding theory bounds, namely the Hamming, Singleton, and Gilbert–Varshamov bounds. We show that the new problem can be regarded as a generalization of Ulam's well-known problem of searching with a lie and as a discrete analog of the compressed sensing problem.
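
In our paraphrase of the setting (notation ours, not necessarily the paper's): a codeword $c$ of a linear code with parity-check matrix $H$ is received as $y=c+e$, and the decoder observes a corrupted syndrome

$$ \hat s \;=\; Hy^{\top}+e' \;=\; He^{\top}+e', $$

so the task is to recover the channel error $e$ from $\hat s$ when both $e$ and the syndrome error $e'$ have small weight.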

 

On Separability of the Classes of Homogeneous and Transitive Perfect Binary Codes
I. Yu. Mogilnykh and F. I. Solov'eva
pp. 139–147

Abstract—Using perfect binary codes as an example, we prove the existence of binary homogeneous nontransitive codes. Together with previously known results, this establishes a hierarchy of degrees of linearity for binary codes: the class of binary linear codes is strictly contained in the class of binary propelinear codes, which is strictly contained in the class of binary transitive codes, which in turn is strictly contained in the class of binary homogeneous codes. We also derive a transitivity criterion for perfect binary codes whose rank exceeds the rank of the Hamming code of the same length by one.
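
In symbols, the resulting hierarchy of classes of binary codes reads

$$ \{\text{linear}\} \subsetneq \{\text{propelinear}\} \subsetneq \{\text{transitive}\} \subsetneq \{\text{homogeneous}\}. $$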

 

On Limit Distributions of the Time of First Passage over a High Level
M. V. Burnashev and G. K. Golubev
pp. 148–164

Abstract—We describe a simple method for finding the limit distribution of the time of first passage over a high level for a random sequence (or random process). The method reduces this problem to the well-studied problem of finding the distribution of the sequence (or process) at a fixed time, which allows us to treat not only the traditional sums of independent random variables but also sums of dependent random variables. As examples, we consider sums of independent random variables with stable limit distributions, sequential hypothesis testing, and first passage over a high level for Gaussian processes with strong dependence.
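
For readers who want to see the object under study, a small Monte Carlo sketch (purely illustrative; the paper's method is analytic, and all parameter values below are our choices) estimates the distribution of the first-passage time $\tau_x=\min\{n: S_n\ge x\}$ of a random walk over a high level $x$:

    import numpy as np

    rng = np.random.default_rng(0)

    def first_passage_times(level, drift=0.1, n_paths=2000, horizon=100_000):
        # simulate n_paths random walks with i.i.d. N(drift, 1) steps and
        # record the first time each one reaches `level`
        times = np.empty(n_paths, dtype=int)
        for k in range(n_paths):
            s, n = 0.0, 0
            while s < level and n < horizon:
                s += rng.normal(drift, 1.0)
                n += 1
            times[k] = n
        return times

    t = first_passage_times(level=50.0)
    print(t.mean(), np.quantile(t, [0.1, 0.5, 0.9]))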

 

Independence Numbers and Chromatic Numbers of Some Distance Graphs
A. V. Bobu, O. A. Kostina, and A. E. Kupriyanov
pp. 165–176

Abstract—We study a family of distance graphs in $\mathbb R^n$. We present bounds on their independence numbers that are asymptotically tight as $n\to\infty$, substantially improve the upper bounds on the chromatic numbers of these graphs, and in a number of cases give explicit constructions of independent sets.

 

One-Armed Bandit Problem for Parallel Data Processing Systems
A. V. Kolnogorov
pp. 177–191

Abstract—We consider the minimax setting for the one-armed bandit problem, i.e., the two-armed bandit problem in which the distribution function of incomes corresponding to the first action is known. Incomes corresponding to the second action are normally distributed with unit variance and unknown mean. In accordance with the main theorem of game theory, the minimax strategy and minimax risk are sought as Bayesian ones corresponding to the worst-case prior distribution. The results can be applied to parallel data processing systems when two processing methods are available and the efficiency of the first is known a priori.
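
A toy simulation of the setting (illustrative only, not the paper's minimax strategy; all names and parameter values are ours): the first action pays a known mean income, the second pays $N(\theta,1)$ incomes with unknown $\theta$, and a naive rule exploits whichever arm currently looks better after a few forced pulls of the unknown arm:

    import numpy as np

    rng = np.random.default_rng(0)

    def run_one_armed_bandit(theta, known_mean=0.0, horizon=1000, forced=10):
        # arm 1: for simplicity pays its known mean deterministically
        # arm 2: pays N(theta, 1) incomes with theta unknown to the player
        n2, sum2, total = 0, 0.0, 0.0
        for t in range(horizon):
            if n2 < forced or sum2 / n2 > known_mean:
                x = rng.normal(theta, 1.0)      # pull the unknown arm
                n2, sum2 = n2 + 1, sum2 + x
                total += x
            else:
                total += known_mean             # fall back on the known arm
        return total

    print(run_one_armed_bandit(theta=0.3))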

 

Coupling of Probability Distributions and an Extremal Problem for the Divergence
V. V. Prelov
pp. 192–199

Abstract—Let $X$ and $Y$ be discrete random variables with probability distributions $P_X$ and $P_Y$, respectively. We obtain a necessary and sufficient condition for the existence of an $\alpha$-coupling of these random variables, i.e., of a joint distribution such that $\Pr\{X=Y\}=\alpha$, where $\alpha$, $0\le\alpha\le 1$, is a given constant. This problem is closely related to the problem of determining the minima of the divergences $D(P_Z\,\|\,P_X)$ and $D(P_X\,\|\,P_Z)$ over all probability distributions $P_Z$ of a random variable $Z$, given $P_X$ and under the condition that $\Pr\{Z=X\}=\alpha$. An explicit solution to this problem is also obtained.
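
One classical constraint that any $\alpha$-coupling must respect (a standard maximal-coupling fact, stated here only for context; the paper obtains the precise necessary and sufficient condition) is

$$ \Pr\{X=Y\} \;\le\; \sum_{x}\min\bigl(P_X(x),P_Y(x)\bigr) \;=\; 1-\|P_X-P_Y\|_{\mathrm{TV}}, $$

with equality attained by the maximal coupling.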

 

On the Source-Channel Separation Theorem for Infinite Source Alphabets
A. Aghajan, S. J. Zahabi, M. Khosravifard, and T. A. Gulliver
pp. 200–204

Abstract—The single-user source-channel separation theorem has been proved for many classes of sources and channels, including sources with finite or countably infinite alphabets. Typically, the theorem is first proved for sources with a finite alphabet, and the result is then extended to sources with a countably infinite alphabet. This paper considers the direct extension of the source-channel separation theorem from certain classes of sources with a finite alphabet to a countably infinite alphabet. Specifically, we provide a solution for memoryless sources and arbitrary channels, and we then discuss how this approach may be extended to general sources and arbitrary channels.