Learning rate adaptation for differentially private stochastic gradient descent

Research output: Conference materials › Poster

Abstract

Differential privacy complicates learning procedures because each access to the data carries a privacy cost. For example, standard hyperparameter tuning on a validation set cannot be applied directly. We propose an algorithm for adapting the learning rate of differentially private stochastic gradient descent (DP-SGD) that avoids the need for a validation set. The adaptation is based on extrapolation: to estimate the error against the gradient flow underlying SGD, we compare the result of one full step with that of two half-steps. We prove the privacy of the method using the moments accountant. Experiments show that the method is competitive with optimally tuned DP-SGD and with a differentially private version of Adam.
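The step-size adaptation described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example of the extrapolation idea, not the authors' published algorithm: the names (adapt_lr_step, grad_fn, tol) are assumptions, and the per-example gradient clipping, Gaussian noise addition, and moments-accountant bookkeeping that DP-SGD requires are omitted.

```python
import numpy as np

def adapt_lr_step(theta, grad_fn, lr, tol=0.1):
    """One adaptive step: compare one full step with two half-steps and
    adjust the learning rate from the resulting local error estimate.
    Sketch only; DP noise, clipping, and privacy accounting are left out."""
    g = grad_fn(theta)
    full = theta - lr * g                 # one full step of size lr

    half = theta - 0.5 * lr * g           # first half-step
    g_half = grad_fn(half)
    two_half = half - 0.5 * lr * g_half   # second half-step

    # Extrapolation-style estimate of the error against the gradient flow:
    # the discrepancy between the two approximations of the same step.
    err = np.linalg.norm(full - two_half) / (np.linalg.norm(two_half) + 1e-12)

    # Standard step-size controller: shrink on large error, grow on small error.
    if err > tol:
        lr *= 0.9 * np.sqrt(tol / (err + 1e-12))
    else:
        lr *= min(1.1, 0.9 * np.sqrt(tol / (err + 1e-12)))

    return two_half, lr
```

In a differentially private setting, the same controller would act on the noisy, clipped gradients, and every gradient evaluation (including the extra one for the half-step) would be charged to the privacy budget.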
Original language: English
Publication status: Published - 8 Dec 2018
MoE publication type: Not Eligible
Event: NeurIPS 2018 Workshop: Privacy Preserving Machine Learning - Montreal, Canada
Duration: 8 Dec 2018 - 8 Dec 2018

Conference

Conference: NeurIPS 2018 Workshop: Privacy Preserving Machine Learning
Country: Canada
City: Montreal
Period: 08/12/2018 - 08/12/2018

Fields of Science

  • 113 Computer and information sciences

Cite this

Koskela, A. H., & Honkela, A. J. H. (2018). Learning rate adaptation for differentially private stochastic gradient descent. Poster session presented at NeurIPS 2018 Workshop: Privacy Preserving Machine Learning, Montreal, Canada.