Learning rate adaptation for differentially private stochastic gradient descent

Research output: Conference materials › Poster › Scientific › Peer-reviewed

Description

Differential privacy complicates learning procedures because each access to the data carries a privacy cost. For example, standard parameter tuning with a validation set cannot be easily applied. We propose an algorithm for adapting the learning rate of differentially private stochastic gradient descent (DP-SGD) that avoids the need for a validation set. The idea for the adaptation comes from the technique of extrapolation: to estimate the error against the gradient flow that underlies SGD, we compare the result of one full step with that of two half-steps. We prove the privacy of the method using the moments accountant mechanism. Experiments show that the method is competitive with optimally tuned DP-SGD and with a differentially private version of Adam.
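
As a rough illustration of the extrapolation idea described above, the following Python sketch compares one full gradient step with two half-steps and rescales the learning rate so that their discrepancy (an estimate of the local error against the underlying gradient flow) stays near a tolerance. This is a hypothetical, simplified sketch, not the authors' exact algorithm: the function and parameter names (adaptive_dp_sgd_step, tol, clip, noise_scale) are assumptions, the gradient is clipped in aggregate rather than per example, and the error controller is a generic one borrowed from ODE step-size control.

import numpy as np

def adaptive_dp_sgd_step(theta, grad_fn, lr, tol=0.1, clip=1.0, noise_scale=1.0, rng=None):
    # Hypothetical sketch of learning-rate adaptation by extrapolation:
    # compare one full step with two half-steps and rescale the learning rate.
    rng = np.random.default_rng() if rng is None else rng

    def noisy_grad(t):
        g = grad_fn(t)
        g = g / max(1.0, np.linalg.norm(g) / clip)                     # clip the gradient norm
        return g + noise_scale * clip * rng.standard_normal(g.shape)   # Gaussian mechanism noise

    g0 = noisy_grad(theta)
    theta_full = theta - lr * g0                                       # one full step
    theta_mid  = theta - 0.5 * lr * g0                                 # first half-step
    theta_half = theta_mid - 0.5 * lr * noisy_grad(theta_mid)          # second half-step

    err = np.linalg.norm(theta_full - theta_half)                      # local error estimate
    lr_new = lr * min(2.0, max(0.5, np.sqrt(tol / (err + 1e-12))))     # bounded step-size controller
    return theta_half, lr_new

In a training loop, theta and lr would simply be replaced by the returned values at each iteration; note that the extra gradient evaluation per step would also need to be covered by the privacy accounting, which the abstract states is done with the moments accountant.
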
Original language: Finnish
Status: Published - 8 December 2018
OKM publication type: Not applicable
Event: NeurIPS 2018 Workshop: Privacy Preserving Machine Learning - Montreal, Canada
Duration: 8 December 2018 - 8 December 2018

Conference

Conference: NeurIPS 2018 Workshop: Privacy Preserving Machine Learning
Country: Canada
City: Montreal
Period: 08/12/2018 - 08/12/2018

Fields of Science

  • 113 Computer and information sciences

Cite this

Koskela, A. H., & Honkela, A. J. H. (2018). Learning rate adaptation for differentially private stochastic gradient descent. Poster presented at NeurIPS 2018 Workshop: Privacy Preserving Machine Learning, Montreal, Canada.
@conference{189d39851d06457483fb9edd6685167d,
title = "Learning rate adaptation for differentially private stochastic gradient descent",
abstract = "Differential privacy complicates learning procedures because each access to the data carries a privacy cost. For example, standard parameter tuning with a validation set cannot be easily applied. We propose an algorithm for the adaptation of the learning rate for differentially private stochastic gradient descent (DP-SGD) that avoids the need for validation set use. The idea for the adaptiveness comes from the technique of extrapolation: to get an estimate for the error against the gradient flow which underlies SGD, we compare the result obtained by one full step and two half-steps. We prove the privacy of the method using the moments accountant mechanism. We show in experiments that the method is competitive with optimally tuned DP-SGD and differentially private version of Adam.",
keywords = "113 Tietojenk{\"a}sittely- ja informaatiotieteet",
author = "Koskela, {Antti Herman} and Honkela, {Antti Juho Henrikki}",
year = "2018",
month = "12",
day = "8",
language = "suomi",
note = "NeurIPS 2018 Workshop: Privacy Preserving Machine Learning ; Conference date: 08-12-2017 Through 08-12-2018",

}
