We attempt to better understand randomization in local distributed graph algorithms by exploring how randomness is used and what we can gain from it:

- We first ask how much randomness is needed to obtain efficient randomized algorithms. We show that for all locally checkable problems that admit polylog $n$-time randomized algorithms, such algorithms exist even if either (I) there is only a single (private) independent random bit in each polylog $n$-neighborhood of the graph, (II) the (private) random bits of different nodes are only polylog $n$-wise independent, or (III) there are only polylog $n$ bits of global shared randomness (and no private randomness).

- Second, we study how much the error probability of randomized algorithms can be improved. For all locally checkable problems that admit polylog $n$-time randomized algorithms, we show that there are such algorithms that succeed with probability $1-n^{-2^{\varepsilon(\log\log n)^2}}$ and, more generally, $T$-round algorithms, for $T\geq$ polylog $n$, that succeed with probability $1-n^{-2^{\varepsilon\log^2 T}}$. We also show that polylog $n$-time randomized algorithms with success probability $1-2^{-2^{\log^\varepsilon n}}$ for some $\varepsilon>0$ can be derandomized into polylog $n$-time deterministic algorithms.

Both directions, reducing the amount of randomness and improving the success probability, can be seen as partial derandomizations of existing randomized algorithms. In all the above cases, we also show that any significant improvement of our results would be a major breakthrough, as it would imply significantly more efficient deterministic distributed algorithms for a wide class of problems.
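To make case (II) concrete: a standard way to stretch few truly random bits into many $k$-wise independent values is to evaluate a random polynomial of degree $< k$ over a finite field at each node's ID. The sketch below is purely illustrative (it is not taken from the paper); the field size `p`, the node-ID indexing, and the function names are our own choices.

```python
import random

def kwise_values(num_nodes, k, p=2_147_483_647, seed=0):
    """Illustrative sketch: k-wise independent values over GF(p).

    A uniformly random polynomial of degree < k over GF(p), evaluated
    at distinct points (here, node IDs 1..num_nodes), yields k-wise
    independent values while consuming only k field elements of shared
    randomness -- far fewer bits than num_nodes independent samples.
    """
    rng = random.Random(seed)  # stands in for the shared random seed
    coeffs = [rng.randrange(p) for _ in range(k)]

    def value(node_id):
        # Horner evaluation of the random polynomial at the node's ID.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * node_id + c) % p
        return acc

    return [value(i) for i in range(1, num_nodes + 1)]
```

With $k=$ polylog $n$, every subset of at most $k$ nodes sees mutually independent values, which is exactly the independence guarantee assumed in case (II), while the total shared randomness is only $k\log p$ bits.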
