PROBLEMS OF INFORMATION TRANSMISSION
A translation of Problemy Peredachi Informatsii


Volume 56, Number 2, April–June, 2020

CONTENTS

 

Comparison of Contraction Coefficients for $f$-Divergences
A. Makur and L. Zheng
pp. 103–156

Abstract—Contraction coefficients are distribution-dependent constants that are used to sharpen standard data processing inequalities for $f$-divergences (or relative $f$-entropies) and produce so-called “strong” data processing inequalities. For any bivariate joint distribution, i.e., any probability vector and stochastic matrix pair, it is known that contraction coefficients for $f$-divergences are upper bounded by unity and lower bounded by the contraction coefficient for $\chi^2$-divergence. In this paper, we elucidate that the upper bound is achieved when the joint distribution is decomposable, and the lower bound can be achieved by driving the input $f$-divergences of the contraction coefficients to zero. Then, we establish a linear upper bound on the contraction coefficients of joint distributions for a certain class of $f$-divergences using the contraction coefficient for $\chi^2$-divergence, and refine this upper bound for the salient special case of Kullback–Leibler (KL) divergence. Furthermore, we present an alternative proof of the fact that the contraction coefficients for KL and $\chi^2$-divergences are equal for bivariate Gaussian distributions (where the former coefficient may impose a bounded second moment constraint). Finally, we generalize the well-known result that the contraction coefficients of stochastic matrices (after extremizing over all possible probability vectors) are equal for all nonlinear operator convex $f$-divergences. In particular, we prove that the so-called “less noisy” preorder over stochastic matrices can be equivalently characterized by any nonlinear operator convex $f$-divergence. As an application of this characterization, we also derive a generalization of Samorodnitsky's strong data processing inequality.
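For reference, the contraction coefficient underlying these results is usually defined as follows (a standard formulation with generic notation, not quoted from the paper): for an input distribution $P_X$ and a channel (stochastic matrix) $W$,
$$\eta_f(P_X, W) \;=\; \sup_{Q_X \,:\, 0 < D_f(Q_X \| P_X) < \infty} \frac{D_f(Q_X W \,\|\, P_X W)}{D_f(Q_X \,\|\, P_X)},$$
so that the strong data processing inequality $D_f(Q_X W \| P_X W) \le \eta_f(P_X, W)\, D_f(Q_X \| P_X)$ holds for every $Q_X$, with $\eta_f(P_X, W) \le 1$ and, as the abstract recalls, $\eta_{\chi^2}(P_X, W) \le \eta_f(P_X, W)$.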

 

New Upper Bounds in the Hypothesis Testing Problem with Information Constraints
M. V. Burnashev
pp. 157–172

Abstract—We consider a hypothesis testing problem where part of the data cannot be observed. Our helper observes the missing data and can send us a limited amount of information about them. What kind of limited information will allow us to make the best statistical inference? In particular, what is the minimum amount of information sufficient to obtain the same results as if we observed all the data directly? We derive estimates for this minimum information, along with some related results.
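The abstract leaves the formal setting implicit; a common formulation of this type of problem (stated here only as an illustrative assumption, not as the paper's exact model) is the following. The sample consists of i.i.d. pairs $(X_i, Y_i)$, $i = 1, \dots, n$, distributed according to $P_{XY}$ under hypothesis $H_0$ and $Q_{XY}$ under $H_1$. The statistician observes only $X^n = (X_1, \dots, X_n)$, while the helper observes $Y^n$ and may transmit a message $M = \varphi(Y^n)$ of at most $k$ bits. One then asks how the error probabilities (or error exponents) achievable from $(X^n, M)$ compare with those achievable from the full data $(X^n, Y^n)$, and how small $k$ can be while the two still coincide.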

 

Detecting Cycles of Length 8 in the Tanner Graph of a QC-LDPC Code Based on Protograph Analysis
A. V. Kharin, K. N. Zavertkin, and A. A. Ovinnikov
pp. 173–184

Abstract—For cycles of length 8 in a Tanner graph, we propose an identification procedure based on the analysis of paths in a protograph. We formulate and prove a number of theorems that introduce identification rules for cycles and restrict the number of subgraphs to be analyzed. To distinguish between these subgraphs, we propose a number of parameters that uniquely determine the group of paths analyzed in the protograph.
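The paper's identification rules themselves are not reproduced in the abstract. As background, here is a minimal Python sketch of the classical shift-sum check for length-8 cycles in QC-LDPC codes built from circulant permutation matrices (the condition due to Fossorier), not the procedure proposed by the authors; the function name and argument layout are illustrative.

    from itertools import product

    def has_length8_closed_path(exponents, L):
        # exponents: m x n shift matrix of the protograph, with -1 marking an
        # all-zero block; L: circulant (lifting) size.
        # A length-8 closed path alternates check rows r_0..r_3 and variable
        # columns c_0..c_3; the lifted Tanner graph contains a corresponding
        # closed path when the alternating sum of shifts is 0 mod L.
        m, n = len(exponents), len(exponents[0])
        for r in product(range(m), repeat=4):
            if any(r[k] == r[(k + 1) % 4] for k in range(4)):
                continue  # vertical steps must change the row
            for c in product(range(n), repeat=4):
                if any(c[k] == c[(k + 1) % 4] for k in range(4)):
                    continue  # horizontal steps must change the column
                blocks = [(r[k], c[k]) for k in range(4)] + \
                         [(r[k], c[(k + 1) % 4]) for k in range(4)]
                if any(exponents[i][j] < 0 for i, j in blocks):
                    continue  # the path may traverse nonzero blocks only
                delta = sum(exponents[r[k]][c[k]] - exponents[r[k]][c[(k + 1) % 4]]
                            for k in range(4))
                if delta % L == 0:
                    return True
        return False

Note that this plain shift-sum check enumerates all closed length-8 paths in the protograph and cannot by itself tell a genuine 8-cycle from a walk that retraces shorter cycles; pruning and classifying this enumeration is exactly the kind of task the protograph-based rules and parameters of the paper address.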

 

On Adaptive Estimation of Linear Functionals from Observations against White Noise
G. K. Golubev
pp. 185–200

Abstract—We consider the problem of adaptive estimation of a linear functional of an unknown multivariate vector from its observations against white Gaussian noise. As a family of estimators for the functional, we use those generated by projection estimators of the unknown vector, and the main problem is to select the best estimator in this family. The goal of the paper is to explain and mathematically justify a simple statistical idea used in the adaptive (i.e., observation-based) choice of the best estimator of a linear functional from a given family of estimators. We also discuss generalizations of the considered statistical model and of the proposed estimation method, which make it possible to cover a broad class of statistical problems.
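For orientation, a standard form of such a model (given here as an assumption about the setting, not a restatement of the paper's) is $Y_i = \theta_i + \sigma \xi_i$, $i = 1, \dots, d$, with $\xi_i \sim \mathcal{N}(0,1)$ i.i.d., and the goal is to estimate the linear functional $L(\theta) = \sum_i l_i \theta_i$. The projection estimator with bandwidth $k$ keeps the first $k$ coordinates, giving the family $\hat L_k = \sum_{i \le k} l_i Y_i$, whose risk trades the squared bias $\bigl(\sum_{i > k} l_i \theta_i\bigr)^2$ against the variance $\sigma^2 \sum_{i \le k} l_i^2$; adaptive estimation means choosing $k$ from the observations themselves.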