Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics

Research output: Contribution to journal › Article › Scientific › Peer-reviewed

Abstract

Stochastic-gradient sampling methods are often used to perform Bayesian inference on neural networks. Methods that incorporate notions of differential geometry have been observed to perform better, with the Riemannian metric improving posterior exploration by accounting for local curvature. However, existing methods often resort to simple diagonal metrics to remain computationally efficient, which forfeits some of these gains. We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration, while incurring only a minor computational overhead over diagonal metrics. We show that these metrics provide improvements for fully connected neural networks (NNs) with sparsity-inducing priors and for convolutional NNs with correlated priors; for other prior choices, the posterior is easy enough that the simpler metrics suffice.
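As background, samplers of this kind build on the stochastic-gradient Langevin update, preconditioned by the inverse of a metric. The sketch below is a minimal illustration of that generic step, assuming a fixed preconditioner and omitting the position-dependent correction term required by the full Riemannian dynamics; the names sgld_step and grad_log_post are hypothetical, and this is not the paper's specific non-diagonal construction.

import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng, precond=None):
    """One preconditioned stochastic-gradient Langevin step (illustrative sketch).

    theta         -- current parameter vector, shape (d,)
    grad_log_post -- stochastic estimate of the gradient of the log posterior at theta
    step_size     -- step size epsilon
    precond       -- positive-definite matrix playing the role of G^{-1}; identity if None
    """
    d = theta.shape[0]
    if precond is None:
        precond = np.eye(d)
    # Drift: half a step along the preconditioned stochastic gradient.
    drift = 0.5 * step_size * precond @ grad_log_post
    # Injected Gaussian noise with covariance step_size * G^{-1}.
    noise = rng.multivariate_normal(np.zeros(d), step_size * precond)
    return theta + drift + noise

With a diagonal precond this reduces to the cheap diagonal preconditioning common in practice; the paper's contribution, by contrast, concerns non-diagonal choices of the metric that keep the overhead small.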
Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
Issue: 8
ISSN: 2835-8856
Status: Published - August 2023
MoE publication type: A1 Journal article, refereed

Fields of science

  • 113 Computer and information sciences
