Empirical risk minimization in the non-interactive local model of differential privacy
Wang, Di; Gaboardi, Marco; Smith, Adam; Xu, Jinhui
In this paper, we study the Empirical Risk Minimization (ERM) problem in the non-interactive
Local Differential Privacy (LDP) model. Previous research on this problem
(Smith et al., 2017) indicates that, to achieve error 𝛼, the sample complexity must grow exponentially with the dimensionality p for general loss functions. In this
paper, we make two attempts to resolve this issue by investigating conditions on the loss
functions that allow us to remove such a limit. In our first attempt, we show that if the loss
function is (∞, T)-smooth, then by using Bernstein polynomial approximation we can avoid
the exponential dependency in the term of 𝛼; that is, the sample complexity depends on 1/𝛼 only polynomially.
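As background, the degree-n Bernstein polynomial of a function f on [0, 1] is B_n(f; x) = Σ_{k=0}^{n} f(k/n) · C(n, k) · x^k (1 − x)^{n−k}, which converges uniformly to f when f is smooth; the paper works with multivariate analogues of this construction over [0, 1]^p. A minimal univariate sketch (the function name bernstein_approx is illustrative):

```python
from math import comb

def bernstein_approx(f, n):
    """Degree-n Bernstein polynomial approximation of f on [0, 1]:
    B_n(f; x) = sum_{k=0}^n f(k/n) * C(n, k) * x**k * (1 - x)**(n - k)."""
    def B(x):
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

# Example: a smooth quadratic is approximated well already at moderate n.
approx = bernstein_approx(lambda t: (t - 0.5)**2, n=50)
print(approx(0.3))  # ~0.044, versus the true value 0.04
```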
We then propose player-efficient algorithms with 1-bit communication complexity and O(1) computation cost for each player. Their error bounds are asymptotically the same as that of the original algorithm. Under some additional assumptions, we also give an algorithm that is more efficient for the server.
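A single-bit randomizer of roughly the following form is the standard building block for such 1-bit protocols; this is a generic randomized-response-style sketch with an unbiased debiasing step, not the authors' exact mechanism:

```python
import numpy as np

def randomize(x, eps, rng):
    """eps-LDP one-bit report for a value x in [-1, 1]: the probability of
    releasing +1 is shifted from 1/2 by at most (e^eps - 1)/(2(e^eps + 1))."""
    p_plus = 0.5 + x * (np.exp(eps) - 1) / (2 * (np.exp(eps) + 1))
    return 1.0 if rng.random() < p_plus else -1.0

def debias(bit, eps):
    """Rescale the released bit so its expectation equals the true x."""
    return bit * (np.exp(eps) + 1) / (np.exp(eps) - 1)

rng = np.random.default_rng(0)
reports = [debias(randomize(0.4, 1.0, rng), 1.0) for _ in range(100_000)]
print(np.mean(reports))  # concentrates around the true value 0.4
```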
In our second attempt, we show that for any 1-Lipschitz generalized linear convex loss function, there is an (𝜖, 𝛿)-LDP algorithm whose sample complexity for achieving error 𝛼 is only linear in the dimensionality p. Our results are based on a technique that approximates the loss function by a polynomial of the inner product between the parameter vector and the data vector.
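To illustrate the inner-product idea: a generalized linear loss depends on the data only through an inner product ⟨w, x⟩, so replacing its univariate link function by a low-degree polynomial turns the loss into a polynomial of that inner product. The sketch below uses a Chebyshev interpolant from NumPy with the logistic link as a stand-in example; the paper's actual approximation may differ:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def link(t):
    """Logistic link, a 1-Lipschitz convex function; the generalized linear
    loss is link(y * <w, x>), a function of the inner product alone."""
    return np.log1p(np.exp(-t))

# Degree-8 polynomial approximation of the link on [-1, 1]; the loss then
# becomes (approximately) a low-degree polynomial of <w, x>.
poly = Chebyshev.interpolate(link, deg=8, domain=[-1, 1])

t = 0.3
print(link(t), poly(t))  # the two values agree closely on [-1, 1]
```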
Finally, motivated by the idea of using polynomial approximation, and building on different types of polynomial approximations, we propose (efficient) non-interactive locally differentially private algorithms for learning the set of k-way marginal queries and the set of smooth queries.
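For intuition on marginal release (a toy sketch, not the paper's algorithm): with records in {−1, +1}^p, every k-way marginal is a linear combination of parities ∏_{j∈S} x_j with |S| ≤ k, and a parity can be estimated non-interactively by per-coordinate randomized response followed by debiasing. The helper names below are illustrative:

```python
import numpy as np

def rr_sign(b, eps0, rng):
    """Randomized response on one sign b in {-1, +1} at privacy level eps0."""
    keep = rng.random() < np.exp(eps0) / (1 + np.exp(eps0))
    return b if keep else -b

def estimate_parity(data, S, eps, rng=np.random.default_rng(0)):
    """Toy eps-LDP estimate of the mean parity prod_{j in S} x_j: each player
    perturbs the k coordinates in S at level eps/k (eps-LDP by composition),
    and the server divides by the per-coordinate attenuation to the power k."""
    k = len(S)
    eps0 = eps / k
    scale = ((np.exp(eps0) - 1) / (np.exp(eps0) + 1)) ** k
    reports = [np.prod([rr_sign(x[j], eps0, rng) for j in S]) for x in data]
    return np.mean(reports) / scale

data = np.random.default_rng(1).choice([-1, 1], size=(50_000, 5))
print(estimate_parity(data, S=[0, 2], eps=2.0))  # near the true mean parity
```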