Synthetic likelihood (SL) is a strategy for parameter inference when the likelihood function is analytically or computationally intractable. In SL, the likelihood function of the data is replaced by a multivariate Gaussian density for summary statistics that compress the observed data. SL requires simulating many replicate datasets at every parameter value considered by a sampling algorithm, such as MCMC, making the method computationally intensive. We propose two strategies to alleviate the computational burden imposed by SL algorithms. First, we introduce a novel MCMC algorithm for SL in which the proposal distribution is sequentially tuned. Second, we exploit strategies borrowed from the correlated-particle-filters literature to improve MCMC mixing in an SL framework. Our methods enable inference for challenging case studies in which the MCMC chain is initialised in low-posterior-probability regions of the parameter space, where standard samplers failed. Our goal is to make the most of each expensive MCMC iteration in SL algorithms, which will broaden the scope of these methods to models with costly simulators. To illustrate the advantages of our framework, we consider three benchmark examples, including parameter estimation for a cosmological model and for a stochastic model with highly non-Gaussian summary statistics.
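The core SL construction described above (simulate replicates at a parameter value, fit a multivariate Gaussian to their summary statistics, and evaluate that density at the observed summaries) can be sketched as follows. This is a minimal illustration with a toy simulator and illustrative function names, not the authors' implementation:

```python
import numpy as np

def simulator(theta, rng, n_obs=50):
    # Toy simulator: Gaussian data with mean theta[0] and log-sd theta[1].
    return rng.normal(theta[0], np.exp(theta[1]), size=n_obs)

def summaries(data):
    # Compress a dataset into a vector of summary statistics.
    return np.array([data.mean(), np.log(data.var() + 1e-12)])

def synthetic_loglik(theta, s_obs, rng, n_sim=200):
    # Simulate n_sim replicate datasets at theta, fit a multivariate
    # Gaussian to their summaries, and evaluate it at the observed summaries.
    S = np.array([summaries(simulator(theta, rng)) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])  # jitter for stability
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (logdet + quad + len(mu) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
s_obs = summaries(simulator(np.array([1.0, 0.0]), rng))
ll_true = synthetic_loglik(np.array([1.0, 0.0]), s_obs, rng)
ll_far = synthetic_loglik(np.array([5.0, 0.0]), s_obs, rng)
```

Each evaluation of `synthetic_loglik` costs `n_sim` simulator calls, which is the per-iteration expense the abstract's two strategies aim to mitigate. A correlated-particle-filters-style variant would additionally reuse (part of) the random numbers across MCMC iterations, e.g. by passing a common random seed to the simulator at the current and proposed parameters.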
Status: Submitted - 30 Oct 2020
- 111 Mathematics