Bayesian solution of an inverse problem for the indirect measurement $M = AU + \varepsilon$ is considered, where $U$ is a function on a domain of $\mathbb{R}^d$. Here $A$ is a smoothing linear operator and $\varepsilon$ is Gaussian white noise. The data is a realization $m_k$ of the random variable $M_k = P_k A U + P_k \varepsilon$, where $P_k$ is a linear, finite-dimensional operator related to the measurement device. To allow computerized inversion, the unknown is discretized as $U_n = T_n U$, where $T_n$ is a finite-dimensional projection, leading to the computational measurement model $M_{kn} = P_k A U_n + P_k \varepsilon$. Bayes' formula then gives the posterior distribution $\pi_{kn}(u_n \mid m_{kn}) \propto \Pi_n(u_n) \exp\bigl(-\tfrac{1}{2}\|m_{kn} - P_k A u_n\|_2^2\bigr)$ in $\mathbb{R}^d$, and the mean $u_{kn} := \int u_n \, \pi_{kn}(u_n \mid m_{kn}) \, du_n$ is considered as the reconstruction of $U$. We discuss a systematic way of choosing the prior distributions $\Pi_n$ for all $n \geq n_0 > 0$ by obtaining them as projections of a distribution in an infinite-dimensional limit case. Such a choice of prior distributions is discretization-invariant in the sense that $\Pi_n$ represents the same a priori information for all $n$ and that the mean $u_{kn}$ converges to a limit estimate as $k, n \to \infty$. Gaussian smoothness priors and wavelet-based Besov space priors are shown to be discretization-invariant. In particular, Bayesian inversion in dimension two with the $B^1_{11}$ prior is related to penalizing the $\ell^1$ norm of the wavelet coefficients of $U$.
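To illustrate the computational model, here is a minimal finite-dimensional sketch of the posterior mean $u_{kn}$ for a linear measurement $m = Au + \varepsilon$ with a Gaussian smoothness prior. Everything concrete in it (the sizes $k$ and $n$, the averaging operator standing in for $P_k A$, the noise level, and the first-difference prior covariance) is an illustrative assumption, not taken from the paper; in the Gaussian case the posterior mean is the solution of the normal equations below.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32   # discretization level (number of unknowns u_n)
k = 24   # number of measurements

# Assumed smoothing linear operator (stand-in for P_k A):
# each measurement averages four neighboring values of u.
A = np.zeros((k, n))
for i in range(k):
    A[i, i : i + 4] = 0.25

sigma = 0.1  # assumed noise standard deviation
true_u = np.sin(np.linspace(0.0, np.pi, n))
m = A @ true_u + sigma * rng.standard_normal(k)

# Gaussian smoothness prior Pi_n = N(0, C) with C^{-1} = I + L^T L,
# where L is a first-difference matrix (an assumed smoothness penalty).
L = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
C_inv = np.eye(n) + L.T @ L

# The posterior is Gaussian, so its mean u_kn minimizes
#   ||m - A u||^2 / sigma^2 + u^T C_inv u
# and solves the corresponding normal equations.
u_kn = np.linalg.solve(A.T @ A / sigma**2 + C_inv, A.T @ m / sigma**2)

print(u_kn.shape)
```

Replacing the Gaussian prior term $u^\top C^{-1} u$ with an $\ell^1$ penalty on wavelet coefficients would correspond to the Besov $B^1_{11}$ prior discussed in the abstract; that case has no closed-form mean and is typically handled by iterative optimization or sampling.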