The Trade-off between Privacy and Fidelity via Ehrhart Theory

Arun Padakandla, P. R. Kumar, Wojciech Szpankowski

As an increasing amount of data is gathered nowadays and stored in databases (DBs), the question arises of how to protect the privacy of individual records in a DB while still providing accurate answers to queries on the DB. Differential Privacy (DP) has gained acceptance as a framework to quantify the vulnerability of algorithms to privacy breaches. We consider the problem of how to sanitize an entire DB via a DP mechanism, on which unlimited further querying is performed. While protecting privacy, it is important that the sanitized DB still provide accurate responses to queries. The central contribution of this work is to characterize the amount of information preserved in an optimal DP DB sanitizing mechanism (DSM). We precisely characterize the utility-privacy trade-off of mechanisms that sanitize DBs in the asymptotic regime of large DBs. We study this in an information-theoretic framework by modeling a generic distribution on the data and a measure of fidelity between the histograms of the original and sanitized DBs. We adopt the popular $\mathbb{L}_1$ distortion metric, which leads to a linear-programming (LP) formulation. This optimization problem is prohibitively complex, with the number of constraints growing exponentially in the parameters of the problem. Leveraging tools from discrete geometry, analytic combinatorics, and duality theorems of optimization, we fully characterize the optimal solution in terms of a power series whose coefficients are the numbers of integer points in a multidimensional convex polytope studied by Ehrhart in 1967. Employing Ehrhart theory, we determine a simple closed-form computable expression for the asymptotic growth of the optimal privacy-fidelity trade-off to infinite precision. At the heart of our findings is a deep connection between the minimum expected distortion and the Ehrhart series of an integral convex polytope.
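As a small illustration of the Ehrhart counting that the abstract invokes (a toy two-dimensional example, not the paper's own multidimensional polytope), the sketch below counts lattice points in integer dilations of the standard 2-simplex and checks that the count agrees with its Ehrhart polynomial $(t+1)(t+2)/2$:

```python
# Toy illustration of Ehrhart counting (not the paper's polytope):
# for the standard 2-simplex Delta = {(x, y) : x >= 0, y >= 0, x + y <= 1},
# the number of integer points in the dilation t*Delta is a polynomial
# in t (Ehrhart, 1967), namely L(t) = (t + 1)(t + 2) / 2.

def lattice_points_in_dilated_simplex(t: int) -> int:
    """Brute-force count of integer points (x, y) with x, y >= 0 and x + y <= t."""
    return sum(1 for x in range(t + 1) for y in range(t - x + 1))

def ehrhart_polynomial(t: int) -> int:
    """Closed-form Ehrhart polynomial of the standard 2-simplex."""
    return (t + 1) * (t + 2) // 2

# The brute-force count matches the polynomial for every dilation factor.
for t in range(8):
    assert lattice_points_in_dilated_simplex(t) == ehrhart_polynomial(t)

# The generating function sum_t L(t) z^t is the Ehrhart series of the
# polytope (here 1 / (1 - z)^3); the paper expresses the optimal
# privacy-fidelity trade-off through the Ehrhart series of its own
# integral convex polytope.
```

The Ehrhart series packages all the dilation counts into one rational function, which is what makes a closed-form asymptotic analysis of the power series in the abstract possible.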
