#### On redundancy of memoryless sources over countable alphabets

##### Maryam Hosseini, Narayana Santhanam

The minimum average number of bits needed to describe a random variable is its entropy, assuming knowledge of the underlying statistics. On the other hand, universal compression supposes that the distribution of the random variable, while unknown, belongs to a known set $\cal P$ of distributions. Such universal descriptions for the random variable are agnostic to the identity of the distribution in $\cal P$. But because they are not matched exactly to the underlying distribution of the random variable, the average number of bits they use is higher, and the excess over the entropy is the "redundancy". This formulation is fundamental to problems not just in compression, but also in estimation and prediction, and has a wide variety of applications from language modeling to insurance. In this paper, we study the redundancy of universal encodings of strings generated by independent identically distributed (iid) sampling from a set $\cal P$ of distributions over a countable support. We first show that if describing a single sample from $\cal P$ incurs finite redundancy, then $\cal P$ is tight, but that the converse does not always hold. If a single sample can be described with finite worst-case regret (a more stringent formulation than the redundancy above), then it is known that length-$n$ iid samples incur only a diminishing (in $n$) redundancy per symbol as $n$ increases. However, we show it is possible that a collection $\cal P$ incurs finite redundancy, yet description of length-$n$ iid samples incurs a constant redundancy per symbol encoded. We then give a sufficient condition on $\cal P$ under which length-$n$ iid samples incur diminishing redundancy per symbol encoded.
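For concreteness, the two quantities contrasted in the abstract can be written in standard minimax form; the notation below is conventional and is assumed here rather than taken from the paper. For a single sample $X \sim p$, the (average-case) redundancy and the worst-case regret of a class $\cal P$ are

$$
R(\mathcal{P}) \;=\; \inf_{q}\,\sup_{p \in \mathcal{P}} \left( \mathbb{E}_p\!\left[\log \tfrac{1}{q(X)}\right] - H(p) \right) \;=\; \inf_{q}\,\sup_{p \in \mathcal{P}} D(p \,\|\, q),
\qquad
\hat{R}(\mathcal{P}) \;=\; \inf_{q}\,\sup_{p \in \mathcal{P}}\,\sup_{x} \log \frac{p(x)}{q(x)},
$$

where the infimum runs over all distributions $q$ on the countable support and $D(\cdot\,\|\,\cdot)$ is the KL divergence. Since the supremum over $x$ upper-bounds the expectation over $X \sim p$, finite worst-case regret implies finite redundancy, which is why the regret formulation is the more stringent of the two.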
