We consider the \emph{approximate minimum selection} problem in the presence of \emph{independent random comparison faults}. The problem asks us to select one of the $k$ smallest elements in a linearly-ordered collection of $n$ elements by performing only \emph{unreliable} pairwise comparisons: whenever two elements are compared, there is a constant probability that the wrong answer is returned. We design a randomized algorithm that solves this problem \emph{with high probability} (w.h.p.) for the whole range of values of $k$ using $O( \log n \cdot ( \frac{n}{k} + \log \log \log n ))$ expected time. We then prove that the expected running time of any algorithm that succeeds w.h.p. must be $\Omega(\frac{n}{k}\log n)$, implying that our algorithm is optimal, in expectation, for almost all values of $k$ (and optimal up to triple-$\log$ factors for $k = \omega(\frac{n}{\log \log \log n})$). These results are perhaps surprising: for $k$ between $\Omega(\log n)$ and $c \cdot n$, for any constant $c<1$, the expected running time must still be $\Omega(\frac{n}{k}\log n)$ even in the absence of comparison faults. Informally speaking, we show how to cope with comparison errors without any substantial complexity penalty w.r.t.\ the fault-free case. Moreover, we prove that as soon as $k = O( \frac{n}{\log\log n})$, it is possible to achieve a worst-case running time of $O(\frac{n}{k}\log n)$.
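To make the comparison model concrete, the following Python sketch simulates an unreliable comparator that errs independently with constant probability $p$, together with the standard majority-vote boosting trick: repeating a comparison $\Theta(\log n)$ times drives its error probability down to $1/\mathrm{poly}(n)$. This is an illustrative model only, not the algorithm of the paper; the function names, the value of $p$, and the trial count are assumptions chosen for the example.

```python
import random

def noisy_less(a, b, p=0.1):
    """Unreliable comparison: reports whether a < b, but returns the
    wrong answer with constant probability p, independently per call."""
    truth = a < b
    return (not truth) if random.random() < p else truth

def boosted_less(a, b, trials, p=0.1):
    """Repeat the noisy comparison `trials` times and take a majority
    vote; by a Chernoff bound the error probability decays
    exponentially in `trials` (so trials = Theta(log n) gives a
    1/poly(n) failure probability per boosted comparison)."""
    votes = sum(noisy_less(a, b, p) for _ in range(trials))
    return votes > trials // 2
```

Note that applying this boosting to every comparison of a fault-free selection procedure would multiply its cost by $\Theta(\log n)$; the point of the result above is that the $O(\log n \cdot (\frac{n}{k} + \log\log\log n))$ bound avoids paying that factor on all $n$ elements.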
