PROBLEMS OF INFORMATION TRANSMISSION
A translation of Problemy Peredachi Informatsii


Volume 48, Number 4, October–December 2012

CONTENTS

 

Analysis of the Relation between Properties of LDPC Codes and the Tanner Graph
V. V. Zyablov and P. S. Rybin
pp. 297–323

Abstract—A new method for estimating the number of errors guaranteed to be corrected by a low-density parity-check (LDPC) code is proposed. The method is based on analyzing edges of the corresponding Tanner graph that possess special properties. In this paper we consider binary LDPC codes with constituent single-parity-check and Hamming codes and an iterative decoding algorithm. Numerical results obtained for the proposed lower bound exceed those for the best previously known lower bounds.
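As background only, the Tanner graph analyzed in the paper is the bipartite graph built from a parity-check matrix H, with variable node j joined to check node i whenever H[i][j] = 1. The Python sketch below (not from the paper; the matrix H is a toy example) just makes this object concrete.

def tanner_graph(H):
    """Return adjacency lists (check -> variables, variable -> checks) for H."""
    m, n = len(H), len(H[0])
    check_to_vars = [[j for j in range(n) if H[i][j]] for i in range(m)]
    var_to_checks = [[i for i in range(m) if H[i][j]] for j in range(n)]
    return check_to_vars, var_to_checks

# Toy 3 x 6 parity-check matrix: three single-parity checks on six code symbols.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
check_to_vars, var_to_checks = tanner_graph(H)
print(check_to_vars)   # neighbours of each check (constituent-code) node
print(var_to_checks)   # neighbours of each variable (code-symbol) node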

 

Asymptotic Single-Trial Strategies for GMD Decoding with Arbitrary Error-Erasure Tradeoff
J. H. Weber, V. R. Sidorenko, C. Senger, and K. A. S. Abdel-Ghaffar
pp. 324–333

Abstract—Generalized minimum distance (GMD) decoders combine some of the virtues of probabilistic and algebraic decoding approaches at low complexity. We investigate single-trial strategies for GMD decoding with arbitrary error-erasure tradeoff, based on either erasing a fraction of the received symbols or erasing all symbols whose reliability values are below a certain threshold. The fraction/threshold may be either static or adaptive, where adaptive means that the erasing is a function of the channel output. Adaptive erasing based on a threshold is a technique that has not been investigated before. An asymptotic approach is used to evaluate the error-correction radius of each strategy. Both known and new results appear as special cases of this general framework.
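The two basic erasing rules named in the abstract, erasing a fixed fraction of the least reliable symbols versus erasing every symbol whose reliability falls below a threshold, can be made concrete with a short Python sketch. The reliability values and parameters below are hypothetical; the sketch does not reproduce the paper's adaptive strategies or its error-correction-radius analysis.

def erase_by_fraction(reliabilities, fraction):
    """Erase the given fraction of symbols with the lowest reliability."""
    n = len(reliabilities)
    k = int(round(fraction * n))
    order = sorted(range(n), key=lambda i: reliabilities[i])
    erased = set(order[:k])
    return [i in erased for i in range(n)]

def erase_by_threshold(reliabilities, threshold):
    """Erase every symbol whose reliability is below the threshold."""
    return [r < threshold for r in reliabilities]

r = [0.9, 0.1, 0.4, 0.8, 0.05, 0.6]           # hypothetical reliability values
print(erase_by_fraction(r, fraction=1 / 3))   # static fraction: 2 erasures
print(erase_by_threshold(r, threshold=0.5))   # static threshold: 3 erasures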

 

On Walsh Code Assignment
B. S. Tsybakov and A. B. Tsybakov
pp. 334–341

Abstract—We consider the problem of orthogonal variable spreading Walsh code assignments. The aim is to provide assignments that can avoid both complicated signaling from the base station (BS) to the users and blind rate and code detection among a large number of possible codes. The assignments considered here partition all users into several pools. Each pool can use its own codes, which are different for different pools. Each user has only a few codes assigned to it within the pool. We state the problem as a combinatorial one expressed in terms of a binary $n\times k$ matrix $\boldsymbol{M}$, where $n$ is the number of users and $k$ is the number of Walsh codes in the pool. A solution to the problem is given as a construction of a matrix $\boldsymbol{M}$ which has the assignment property defined in the paper. Two constructions of such $\boldsymbol{M}$ are presented under different conditions on $n$ and $k$. The first construction is optimal in the sense that it gives the minimal number of Walsh codes $\ell$ assigned to each user for given $n$ and $k$. The optimality follows from a proved necessary condition for the existence of $\boldsymbol{M}$ with the assignment property. In addition, we propose a simple optimal assignment algorithm for the first construction.
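As background, Walsh codes of length $2^m$ are the rows of a Sylvester-type Hadamard matrix and are pairwise orthogonal; the Python sketch below generates such a pool and verifies orthogonality. The assignment property and the two constructions of the matrix $\boldsymbol{M}$ are defined in the paper and are not reproduced here.

def walsh_codes(m):
    """Return the 2^m Walsh codes of length 2^m as +/-1 sequences."""
    H = [[1]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

pool = walsh_codes(3)   # a pool of k = 8 codes of length 8
assert all(
    sum(a * b for a, b in zip(pool[i], pool[j])) == 0
    for i in range(len(pool)) for j in range(len(pool)) if i != j
)
print(len(pool), "pairwise orthogonal codes of length", len(pool[0]))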

 

On One Asymptotic Formula for the Euler Constant
E. A. Karatsuba
pp. 342–346

Abstract—We derive a new asymptotic representation for the Euler constant $\gamma$.
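The new representation itself is not quoted in the abstract; for reference only, the classical definition of the Euler constant (a standard formula, not the paper's result) is
$$\gamma=\lim_{n\to\infty}\Biggl(\sum_{k=1}^{n}\frac1k-\ln n\Biggr)=0.5772156649\ldots$$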

 

A Remark on the Problem of Nonnegative $k$-Subset Sums
H. Aydinian and V. M. Blinovsky
pp. 347–351

Abstract—Given a set of $n$ real numbers with a nonnegative sum, consider the family of all its $k$-element subsets with nonnegative sums. How small can this family be? We show that this problem is closely related to a problem raised by Ahlswede and Khachatrian in [1]. The latter, in a special case, is precisely the problem of determining the minimal number $c_n(k)$ such that any $k$-uniform hypergraph on $n$ vertices with $c_n(k)+1$ edges has a perfect fractional matching. We show that the results obtained in [1] can be applied to the former problem. Moreover, we conjecture that the two problems have, in general, the same solution.
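The quantity studied is easy to state computationally: for a set of $n$ reals with nonnegative total sum, count its $k$-element subsets whose sums are nonnegative. The brute-force Python sketch below is purely illustrative; the example set is arbitrary and is not an extremal configuration from the paper.

from itertools import combinations

def nonneg_k_subsets(values, k):
    """Number of k-element subsets of `values` with a nonnegative sum."""
    return sum(1 for c in combinations(values, k) if sum(c) >= 0)

values = [5, 1, -1, -1, -1, -2]      # n = 6, total sum = 1 >= 0
print(nonneg_k_subsets(values, 3))   # how many of the C(6,3) = 20 triples qualify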

 

Achievable Complexity-Performance Tradeoffs in Lossy Compression
A. Gupta, S. Verdú, and T. Weissman
pp. 352–375

Abstract—We present several results related to the complexity-performance tradeoff in lossy compression. The first result shows that for a memoryless source with rate-distortion function $R(D)$ and a bounded distortion measure, the rate-distortion point $(R(D)+\gamma,D+\varepsilon)$ can be achieved with constant decompression time per (separable) symbol and compression time per symbol proportional to $(\lambda_1/\varepsilon)^{\lambda_2/\gamma^2}$, where $\lambda_1$ and $\lambda_2$ are source dependent constants. The second result establishes that the same point can be achieved with constant decompression time and compression time per symbol proportional to $(\rho_1/\gamma)^{\rho_2/\varepsilon^2}$. These results imply, for any function $g(n)$ that increases without bound arbitrarily slowly, the existence of a sequence of lossy compression schemes of blocklength $n$ with $O(ng(n))$ compression complexity and $O(n)$ decompression complexity that achieve the point $(R(D),D)$ asymptotically with increasing blocklength. We also establish that if the reproduction alphabet is finite, then for any given $R$ there exists a universal lossy compression scheme with $O(ng(n))$ compression complexity and $O(n)$ decompression complexity that achieves the point $(R,D(R))$ asymptotically for any stationary ergodic source with distortion-rate function $D(\cdot)$.

 

Complexity-Compression Tradeoffs in Lossy Compression via Efficient Random Codebooks and Databases
C. Gioran and I. Kontoyiannis
pp. 376–394

Abstract—The compression-complexity tradeoff of lossy compression algorithms that are based on a random codebook or a random database is examined. Motivated, in part, by recent results of Gupta–Verdú–Weissman (GVW) and their underlying connections with the pattern-matching scheme of Kontoyiannis’ lossy Lempel–Ziv algorithm, we introduce a nonuniversal version of the lossy Lempel–Ziv method (termed LLZ). The optimality of LLZ for memoryless sources is established, and its performance is compared to that of the GVW divide-and-conquer approach. Experimental results indicate that the GVW approach often yields better compression than LLZ, but at the price of much higher memory requirements. To combine the advantages of both, we introduce a hybrid algorithm (HYB) that utilizes both the divide-and-conquer idea of GVW and the single-database structure of LLZ. It is proved that HYB has exactly the same rate-distortion performance and implementation complexity as GVW while, like LLZ, requiring less memory, by a factor which may become unbounded depending on the choice of the relevant design parameters. Experimental results are also presented, illustrating the performance of all three methods on data generated by simple discrete memoryless sources. In particular, the HYB algorithm is shown to outperform existing schemes for the compression of some simple discrete sources with respect to the Hamming distortion criterion.