Let me summarize the basic idea of the Metropolis algorithm here. The idea comes from the famous Japanese textbook on Bayesian modeling, the so-called "Green book".
Preliminary
Let q be a parameter to be sampled, L(q) be the likelihood of the parameter, and Y be the observed data. Suppose q follows a distribution p(q|Y). The distribution p(q|Y) is said to be the stationary distribution if q^* \sim p(q|Y), where q^* is another sample drawn by the Metropolis algorithm. If p(q|Y) is the stationary distribution, the equality \displaystyle p(q|Y)=\frac{L(q)}{\sum_q L(q)} \propto L(q) holds from the definition (a flat prior is assumed here, so the posterior is proportional to the likelihood).
Derivation
Since q^* is proposed symmetrically at random, the probability that L(q^*)>L(q) is 0.5; in that case the proposal is always accepted, so the probability that q^* is accepted is:
\begin{align}
p(q\to q^*) = 0.5 \times 1,
\nonumber
\end{align}
Now consider the opposite situation, where we already have q^* and q is proposed. The probability that q is accepted is:
\begin{align}
p(q^*\to q) = 0.5 \times \frac{L(q)}{L(q^*)}.
\nonumber
\end{align}
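In code, a single Metropolis step with a symmetric proposal can be sketched as follows. This is a minimal Python sketch; `metropolis_step`, the `log_likelihood` interface, and `proposal_width` are my own illustrative names, not from the Green book:

```python
import math
import random

def metropolis_step(q, log_likelihood, proposal_width=1.0):
    """One Metropolis step with a symmetric random-walk proposal."""
    # Propose q* symmetrically around the current value q.
    q_star = q + random.uniform(-proposal_width, proposal_width)
    # Accept with probability min(1, L(q*)/L(q)); logs avoid overflow.
    log_ratio = log_likelihood(q_star) - log_likelihood(q)
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):
        return q_star  # accepted: move to the proposed value
    return q           # rejected: stay at the current value
```

Note that when the proposal is rejected, the chain stays at the current value rather than discarding the draw; this repetition is what makes the acceptance probabilities above work out.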
Combining the two equations above, we have:
\begin{align}
\frac{p(q\to q^*)}{p(q^*\to q)} &= \frac{0.5 \times 1}{\displaystyle 0.5 \times \frac{L(q)}{L(q^*)}} = \frac{L(q^*)}{L(q)},
\nonumber \\
L(q^*)p(q^*\to q) &= L(q)p(q\to q^*),
\nonumber \\
p(q^*|Y)p(q^*\to q) &= p(q|Y)p(q\to q^*).
\nonumber
\end{align}
where the last line uses p(q|Y) \propto L(q). Taking the sum over q on both sides:
\begin{align}
\sum_q p(q^*|Y)p(q^*\to q) &= \sum_q p(q|Y)p(q\to q^*)
\nonumber \\
p(q^*|Y) \sum_q p(q^*\to q) &= \sum_q p(q|Y)p(q\to q^*)
\nonumber \\
p(q^*|Y) &= \sum_q p(q|Y)p(q\to q^*)
\nonumber
\end{align}
In the second step, p(q^*|Y) does not depend on q and factors out of the sum; in the last step, \sum_q p(q^*\to q) = 1 because the transition probabilities sum to one. The final expression shows that q^* also follows the stationary distribution p(q|Y).
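As a check that the chain really converges to p(q|Y), the full random-walk Metropolis loop can be sketched and run against a known target. This is a minimal sketch, not the Green book's own code; I assume a standard normal target, i.e. L(q) \propto \exp(-q^2/2), and all names and defaults are illustrative:

```python
import math
import random

def metropolis(log_likelihood, q0=0.0, n_steps=20000, width=1.0, seed=1):
    """Random-walk Metropolis: returns the full chain of samples."""
    rng = random.Random(seed)
    q = q0
    samples = []
    for _ in range(n_steps):
        q_star = q + rng.uniform(-width, width)          # symmetric proposal
        log_ratio = log_likelihood(q_star) - log_likelihood(q)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            q = q_star                                   # accept q*
        samples.append(q)                                # record q either way
    return samples

# Target: standard normal, log L(q) = -q^2 / 2 up to a constant.
chain = metropolis(lambda q: -0.5 * q * q)
kept = chain[5000:]                                      # drop burn-in
mean = sum(kept) / len(kept)
var = sum((s - mean) ** 2 for s in kept) / len(kept)
```

After burn-in, the empirical mean and variance should be close to 0 and 1, matching the stationary distribution.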
Detailed Balance Condition
More generally, this relationship is written as the detailed balance condition:
\begin{align}
f(\theta|\theta')f(\theta'|x) = f(\theta'|\theta)f(\theta|x),
\end{align}
where f(\theta|x) is the stationary (posterior) density given data x, and f(\theta'|\theta) is the transition density from \theta to \theta'.
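The detailed balance condition is easy to verify numerically on a small discrete chain. Below is a hypothetical three-state example (the weights `L` and the uniform proposal are my own illustration, not from the Green book): with Metropolis acceptance min(1, L_j/L_i), the identity \pi_i P(i\to j) = \pi_j P(j\to i) holds for every pair of states:

```python
# Unnormalized target weights for three states (illustrative values).
L = [1.0, 2.0, 4.0]
n = len(L)

def P(i, j):
    """Metropolis transition probability from state i to state j."""
    if i == j:
        # Remaining mass after all possible moves away from i.
        return 1.0 - sum(P(i, k) for k in range(n) if k != i)
    propose = 1.0 / (n - 1)            # symmetric uniform proposal
    accept = min(1.0, L[j] / L[i])     # Metropolis acceptance probability
    return propose * accept

# Stationary distribution: normalized weights.
pi = [w / sum(L) for w in L]
# Detailed balance: pi[i] * P(i, j) == pi[j] * P(j, i) for all i, j.
```

Summing the detailed balance identity over one argument reproduces the stationarity result derived above.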