Abstract for

"Adaptive Metropolis Sampling with Product Distributions"

The Metropolis-Hastings (MH) algorithm is a powerful computational technique for sampling a given high-dimensional distribution P(x) by generating a Markovian random walk X(t) that ergodically traces P. Its applications range from high-dimensional integration (Markov chain Monte Carlo) to function optimization (simulated annealing). At each iteration t of the algorithm, a proposal distribution T(x) is sampled to get a point x', and the next point on the random walk, X(t + 1), is set either to the current point X(t) or to x', depending on the ratio P(x') T(X(t)) / [P(X(t)) T(x')]. In this paper we present theoretical and experimental investigations of the relation between MH and a new set of tools called Product Distribution (PD) theory. In particular, we investigate having T be a product distribution, T(x) = T1(x1) T2(x2) ... TN(xN). Since such a T is relatively low-dimensional, it can be adaptively updated, in a Markovian fashion based on the points visited during the walk, so that it becomes an increasingly accurate approximation to P. Doing this should ensure that X(t + 1) differs from X(t) more often than it does in conventional MH, and therefore that the random walk traces out P more efficiently. There are other benefits to such adaptive MH beyond more efficient sampling. For example, by generating the product distribution T, adaptive MH provides an approximation to P amenable to data analysis and visualization. Conversely, gradient-based techniques for forming PD approximations to P can be used before the start of a conventional MH run to formulate the (fixed) proposal distribution for that application of MH.
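
The scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes each factor Ti is a one-dimensional Gaussian whose mean and variance are periodically refit to the walk's history, and it uses the independence-sampler acceptance ratio P(x') T(X(t)) / [P(X(t)) T(x')] stated in the abstract. The function names, the Gaussian choice for the Ti, and the adaptation schedule are all illustrative assumptions.

```python
import numpy as np

def log_t(z, mu, sigma):
    """log T(z) for the product proposal T(z) = prod_i N(z_i; mu_i, sigma_i^2)."""
    return -0.5 * np.sum(((z - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

def adaptive_mh(log_p, x0, n_steps=5000, adapt_every=100, seed=0):
    """Adaptive independence MH with a product-of-Gaussians proposal.

    Illustrative sketch: the proposal is refit to the visited points every
    `adapt_every` steps so it becomes a better approximation to P.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    dim = x.size
    mu, sigma = np.zeros(dim), np.ones(dim)   # initial proposal parameters
    chain = [x.copy()]
    for t in range(1, n_steps + 1):
        xp = rng.normal(mu, sigma)            # sample x' from the product T
        # log of the acceptance ratio P(x') T(X(t)) / [P(X(t)) T(x')]
        log_alpha = log_p(xp) + log_t(x, mu, sigma) - log_p(x) - log_t(xp, mu, sigma)
        if np.log(rng.uniform()) < log_alpha:
            x = xp
        chain.append(x.copy())
        # Adaptation: refit each Gaussian factor Ti to the walk's history
        if t % adapt_every == 0:
            hist = np.array(chain)
            mu = hist.mean(axis=0)
            sigma = np.maximum(hist.std(axis=0), 0.1)  # floor keeps T proper
    return np.array(chain)

# Example target: a 2-D Gaussian with unequal coordinate scales
log_p = lambda z: -0.5 * (z[0] ** 2 + (z[1] / 2.0) ** 2)
chain = adaptive_mh(log_p, x0=[0.5, -0.5])
```

As the fitted Ti approach the marginals of P, proposed points x' are accepted more often, which is the mechanism by which the adaptive walk traces out P more efficiently than a fixed-proposal MH run.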