$(\alpha,\beta)$-Leakage: A Unified Privacy Leakage Measure

Atefeh Gilani, Gowtham R. Kurri, Oliver Kosut, Lalitha Sankar

We introduce a family of information leakage measures called maximal $(\alpha,\beta)$-leakage, parameterized by non-negative real numbers $\alpha$ and $\beta$. The measure is formalized via an operational definition involving an adversary guessing an unknown (randomized) function of the data given the released data. We obtain a simplified computable expression for the measure and show that it satisfies several basic properties, including monotonicity in $\beta$ for a fixed $\alpha$, non-negativity, data processing inequalities, and additivity over independent releases. We highlight the relevance of this family by showing that it bridges several known leakage measures, including maximal $\alpha$-leakage $(\beta=1)$, maximal leakage $(\alpha=\infty,\beta=1)$, local differential privacy (LDP) $(\alpha=\infty,\beta=\infty)$, and local Rényi differential privacy (LRDP) $(\alpha=\beta)$, thereby giving an operational interpretation to local Rényi differential privacy. We also study a conditional version of maximal $(\alpha,\beta)$-leakage, which we leverage to recover differential privacy and Rényi differential privacy. A new variant of LRDP, which we call maximal Rényi leakage, appears as a special case of maximal $(\alpha,\beta)$-leakage for $\alpha=\infty$ that smoothly tunes between maximal leakage ($\beta=1$) and LDP ($\beta=\infty$). Finally, we show that a vector form of the maximal Rényi leakage relaxes differential privacy under Gaussian and Laplace mechanisms.
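For context, the two extreme points that the $\alpha=\infty$ slice of this family interpolates between have standard definitions in the literature (these formulas are the usual ones for maximal leakage and LDP, not notation taken from this abstract):

```latex
% Maximal leakage (Issa-Wagner-Kamath), the (\alpha,\beta)=(\infty,1) endpoint:
\mathcal{L}_{\mathrm{max}}(X \to Y)
  = \log \sum_{y} \max_{x :\, P_X(x) > 0} P_{Y|X}(y \mid x).

% \varepsilon-local differential privacy, the (\alpha,\beta)=(\infty,\infty)
% endpoint: a mechanism P_{Y|X} satisfies \varepsilon-LDP if and only if
\sup_{y,\, x,\, x'} \frac{P_{Y|X}(y \mid x)}{P_{Y|X}(y \mid x')} \le e^{\varepsilon}.
```

Maximal Rényi leakage ($\alpha=\infty$, $\beta$ free) is presented in the abstract as a bridge between these two: the worst-case-average quantity above at $\beta=1$ and the worst-case-ratio guarantee of LDP as $\beta\to\infty$.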
