
A random field is simply another name for a random function F(x), where x ranges over some state space. As described in this other coupling demonstration, we can build a Markov chain X_t by chaining together a series of independent random functions: X_{t+1} = F_t(X_t) for t = 0, 1, 2, 3, and so on. Each such function F_t has the same fixed probabilities of taking a particular value, i.e. there exists a transition kernel p(x,A) such that


\text{Prob}(F_t(x) \in A) = p(x,A)
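
To make this concrete, here is a minimal Java sketch (not the applet's code) of such a chain on a hypothetical three-state space. The transition matrix P below is made up purely for illustration, and F_t is only ever evaluated at the current state, so it suffices to draw F_t(x) from the row p(x, .).

import java.util.Random;

// Minimal sketch: a Markov chain on the states {0, 1, 2} built by chaining
// independent random functions F_t.  Each F_t is evaluated only at the
// current state, by drawing from the row p(x, .) of a hypothetical matrix P.
public class RandomFunctionChain {

    static final double[][] P = {          // hypothetical transition kernel p(x, y)
        {0.5, 0.3, 0.2},
        {0.1, 0.6, 0.3},
        {0.4, 0.4, 0.2}
    };

    // Evaluate F_t at state x: sample a value from p(x, .).
    static int applyF(int x, Random rng) {
        double u = rng.nextDouble(), cum = 0.0;
        for (int y = 0; y < P[x].length; y++) {
            cum += P[x][y];
            if (u < cum) return y;
        }
        return P[x].length - 1;            // guard against rounding error
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int x = 0;                         // X_0
        for (int t = 0; t < 10; t++) {
            x = applyF(x, rng);            // X_{t+1} = F_t(X_t)
            System.out.println("X_" + (t + 1) + " = " + x);
        }
    }
}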

Suppose now that we modify the functions F_t in a way that keeps these probabilities intact. By chaining the new functions, which we now call G_t, we obtain a Markov chain Y_{t+1} = G_t(Y_t) which is statistically the same as X_t but may have other new properties. Whenever we want several chains, all with the same transition kernel p(x,A), to meet over time, it makes sense to modify the original F_t, replacing them with suitably chosen G_t designed to facilitate coalescence. Perfect simulation, for example, requires such constructions. See also here.
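
As an illustration of one possible choice of G_t (not necessarily the construction used on this page), the Java sketch below lets every state share a single uniform draw U_t, fed through the inverse CDF of its row of the same hypothetical matrix P as before: the transition kernel is unchanged, yet two chains driven by the same sequence G_t can meet and, once they do, they remain together.

import java.util.Random;

// Sketch: one shared uniform U_t per step defines the whole function G_t.
// Each chain still moves according to p(x, .), but both chains feel the same
// randomness, which encourages coalescence.
public class SharedUniformCoupling {

    static final double[][] P = {          // same hypothetical kernel as above
        {0.5, 0.3, 0.2},
        {0.1, 0.6, 0.3},
        {0.4, 0.4, 0.2}
    };

    // G_t(x) = inverse CDF of p(x, .) evaluated at the shared uniform u.
    static int applyG(int x, double u) {
        double cum = 0.0;
        for (int y = 0; y < P[x].length; y++) {
            cum += P[x][y];
            if (u < cum) return y;
        }
        return P[x].length - 1;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        int y1 = 0, y2 = 2;                    // two chains, different starting states
        for (int t = 0; t < 50 && y1 != y2; t++) {
            double u = rng.nextDouble();       // one U_t defines the whole function G_t
            y1 = applyG(y1, u);                // Y_{t+1}  = G_t(Y_t)
            y2 = applyG(y2, u);                // Y'_{t+1} = G_t(Y'_t)
            System.out.println("t=" + (t + 1) + "  Y=" + y1 + "  Y'=" + y2);
        }
        System.out.println(y1 == y2 ? "The chains have coupled." : "Not coupled yet.");
    }
}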

Suppose that we draw a graph of the functions F_t(x) and G_t(x). Since these functions are random, the graphs will not always look the same, but certain qualitative features may still stand out. For example, the graphs may be flat or horizontal in some places. In that case all locations x within such a region map to the same value, that is G_t(x) = G_t(y) for all x and y within the region where the graph of G_t is flat. If G_t were used to drive two Markov chains, say Y_{t+1} = G_t(Y_t) and Y'_{t+1} = G_t(Y'_t), and if Y_t and Y'_t both happened to be inside this region, then we would have Y_{t+1} = Y'_{t+1} afterwards and forever. The Markov chains have coupled. Obviously, the chance of coupling two random chains (which both use the same series G_t) increases with the number of flat areas in each G_t.
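
The Java sketch below shows one classical way of obtaining flat graphs, the minorization or 'splitting' device, which is not necessarily what the applet does: with probability beta the whole graph of G_t is flat, so any two chains driven by it couple instantly, while the marginal distribution of G_t(x) remains p(x, .). The three-state matrix is again hypothetical.

import java.util.Random;

// Sketch: every row of P dominates BETA * NU, so with probability BETA the
// function G_t maps every state to one common draw from NU (a flat graph,
// coupling all chains at once); otherwise each evaluated state draws from
// its residual row.  Either way the marginal of G_t(x) is still p(x, .).
public class FlatFunctionCoupler {

    static final double[][] P = {
        {0.5, 0.3, 0.2},
        {0.1, 0.6, 0.3},
        {0.4, 0.4, 0.2}
    };
    static final double[] MIN = {0.1, 0.3, 0.2};   // columnwise minima min_x p(x, y)
    static final double BETA = 0.6;                // sum of the minima
    static final double[] NU = {0.1 / BETA, 0.3 / BETA, 0.2 / BETA};

    // Sample from the (possibly unnormalised) weights w, which sum to total.
    static int sample(double[] w, double total, Random rng) {
        double u = rng.nextDouble() * total, cum = 0.0;
        for (int y = 0; y < w.length; y++) {
            cum += w[y];
            if (u < cum) return y;
        }
        return w.length - 1;
    }

    static double[] residual(int x) {              // p(x, y) - min_x' p(x', y)
        double[] r = new double[P[x].length];
        for (int y = 0; y < r.length; y++) r[y] = P[x][y] - MIN[y];
        return r;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int y1 = 0, y2 = 2;                        // two chains driven by the same G_t
        for (int t = 0; t < 50 && y1 != y2; t++) {
            if (rng.nextDouble() < BETA) {         // G_t is flat everywhere
                int z = sample(NU, 1.0, rng);
                y1 = z;
                y2 = z;
            } else {                               // residual part, not flat in general
                y1 = sample(residual(y1), 1.0 - BETA, rng);
                y2 = sample(residual(y2), 1.0 - BETA, rng);
            }
            System.out.println("t=" + (t + 1) + "  Y=" + y1 + "  Y'=" + y2);
        }
        System.out.println(y1 == y2 ? "Coupled at state " + y1 + "." : "Not coupled yet.");
    }
}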

To view this applet, please install the Java Plugin.

If you press the button above, a separate window should open demonstrating this scheme in the context of Markov chain Monte Carlo simulation. Starting with a random function F_t which is not necessarily flat anywhere but has a given set of transition probabilities, we generate successive modifications of this function by flattening randomly chosen regions in such a way that we do not destroy these probabilities. The exact details are described in an accompanying paper (ps/pdf). Eventually, we end up with a "fully flattened" function G_t to replace F_t.
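
For intuition only (the paper's construction differs in its details), the Java sketch below performs a single flattening move on a pair of states: the pair (F(x1), F(x2)) is redrawn from a maximal coupling of the rows p(x1, .) and p(x2, .), so each marginal row is preserved while the two values come out equal with probability one minus the total-variation distance between the rows. The matrix P is hypothetical.

import java.util.Random;

// Sketch of one flattening move that does not destroy the transition
// probabilities: resample (F[x1], F[x2]) from a maximal coupling of the rows
// p(x1, .) and p(x2, .).  Each margin is still the correct row.
public class FlatteningMove {

    static final double[][] P = {
        {0.5, 0.3, 0.2},
        {0.1, 0.6, 0.3},
        {0.4, 0.4, 0.2}
    };

    // Sample from the (possibly unnormalised) weights w, which sum to total.
    static int sample(double[] w, double total, Random rng) {
        double u = rng.nextDouble() * total, cum = 0.0;
        for (int y = 0; y < w.length; y++) {
            cum += w[y];
            if (u < cum) return y;
        }
        return w.length - 1;
    }

    // Redraw (F[x1], F[x2]) from a maximal coupling of p(x1, .) and p(x2, .).
    static void flattenPair(int[] F, int x1, int x2, Random rng) {
        int n = P[x1].length;
        double[] overlap = new double[n];
        double common = 0.0;
        for (int y = 0; y < n; y++) {
            overlap[y] = Math.min(P[x1][y], P[x2][y]);
            common += overlap[y];
        }
        if (rng.nextDouble() < common) {
            // Flat part: both states now map to the same value.
            int z = sample(overlap, common, rng);
            F[x1] = z;
            F[x2] = z;
        } else {
            // Residual part: independent draws; margins remain p(x1,.) and p(x2,.).
            double[] r1 = new double[n], r2 = new double[n];
            for (int y = 0; y < n; y++) {
                r1[y] = P[x1][y] - overlap[y];
                r2[y] = P[x2][y] - overlap[y];
            }
            F[x1] = sample(r1, 1.0 - common, rng);
            F[x2] = sample(r2, 1.0 - common, rng);
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(3);
        int[] F = new int[P.length];
        for (int x = 0; x < F.length; x++) F[x] = sample(P[x], 1.0, rng); // initial F_t
        flattenPair(F, 0, 2, rng);   // one flattening move on states 0 and 2
        System.out.println("F(0)=" + F[0] + "  F(1)=" + F[1] + "  F(2)=" + F[2]);
    }
}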

The controls of this applet are similar to those for the Metropolis-Hastings applet. The main window under the target distribution contains the successive modifications of the random field F_t, whose transition probabilities correspond to q(x,y), the proposal distribution for a Metropolis-Hastings algorithm. The third and fourth panels on the left show histograms of the modified and unmodified random functions (also called fields here), respectively, after metropolization. If you wait long enough, both histograms will look alike, as they should, since the transition probabilities have been preserved.