I am trying to implement the Metropolis algorithm (a simpler version of the Metropolis-Hastings algorithm) in Python.
Here is my implementation:
import numpy as np

def Metropolis_Gaussian(p, z0, sigma, n_samples=100, burn_in=0, m=1):
    """
    Metropolis algorithm using a Gaussian proposal distribution.
    p: distribution that we want to sample from (can be unnormalized)
    z0: initial sample
    sigma: standard deviation of the proposal normal distribution.
    n_samples: number of final samples that we want to obtain.
    burn_in: number of initial samples to discard.
    m: this number is used to take every m-th sample at the end
    """
    # List of samples, check feasibility of first sample and set z to first sample
    sample_list = [z0]
    _ = p(z0)
    z = z0
    # set a counter of samples for burn-in
    n_sampled = 0
    while len(sample_list[::m]) < n_samples:
        # Sample a candidate from Normal(z, sigma), draw a uniform sample, find acceptance probability
        cand = np.random.normal(loc=z, scale=sigma)
        u = np.random.rand()
        try:
            prob = min(1, p(cand) / p(z))
        except (OverflowError, ValueError):
            continue
        n_sampled += 1
        if prob > u:
            z = cand  # accept and make the candidate the new sample
        # do not add burn-in samples
        if n_sampled > burn_in:
            sample_list.append(z)
    # Finally take every m-th sample in order to achieve approximate independence
    return np.array(sample_list)[::m]
When I apply the algorithm to an exponential function, it takes very little time. However, when I try it on the t-distribution, it takes a very long time, even though it is not doing that many more computations. Here is how you can reproduce my setup:
t_samples = Metropolis_Gaussian(pdf_t, 3, 1, 1000, 1000, m=100)
plt.hist(t_samples, density=True, bins=15, label='histogram of samples')
x = np.linspace(min(t_samples), max(t_samples), 100)
plt.plot(x, pdf_t(x), label='t pdf')
plt.xlim(min(t_samples), max(t_samples))
plt.title("Sampling t distribution via Metropolis")
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.legend()
This code takes a very long time to run and I am not sure why. In my Metropolis_Gaussian code, I am trying to improve efficiency by not storing the burn-in samples. The function pdf_t is defined as follows:
from scipy.stats import t

def pdf_t(x, df=10):
    return t.pdf(x, df=df)
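For comparison, the exponential case mentioned above was just an unnormalized density along these lines (a rough sketch, the exact code I used is not shown here; pdf_exp and lam are illustrative names):

import numpy as np

def pdf_exp(x, lam=1.0):
    # Unnormalized exponential density; Metropolis only needs the target up to a constant
    return np.exp(-lam * x) if x > 0 else 0.0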
I answered a similar question previously. Many of the things I mentioned there (not computing the current likelihood on every iteration, pre-computing the random innovations, and so on) apply here as well.
Another improvement to your implementation is to not use a list to store the samples. Instead, pre-allocate the memory for the samples and store them as an array. Something like samples = np.zeros(n_samples) is more efficient than appending to a list on every iteration.
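In isolation, the difference between the two patterns looks something like this (a minimal sketch with dummy values, not the final implementation):

import numpy as np

n_samples = 1000

# Pattern in the question: grow a Python list, convert to an array at the end
sample_list = []
for i in range(n_samples):
    sample_list.append(float(i))
samples_from_list = np.array(sample_list)

# Pre-allocated pattern: allocate once up front and write by index
samples = np.zeros(n_samples)
for i in range(n_samples):
    samples[i] = float(i)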
You already mentioned that you tried to improve efficiency by not recording the burn-in samples. That is a good idea. You can apply a similar thinning trick by only recording every m-th sample, since you throw the rest away in the return statement with np.array(sample_list)[::m] anyway. You can do this by changing:
# do not add burn-in samples
if n_sampled > burn_in:
    sample_list.append(z)
to:
# Only keep iterations after burn-in and for every m-th iteration
if n_sampled > burn_in and n_sampled % m == 0:
    samples[(n_sampled - burn_in) // m] = z
It is also worth noting that you do not need to compute min(1, p(cand) / p(z)); computing p(cand) / p(z) alone is enough. I realize that formally the min is necessary (to ensure the acceptance probability lies between 0 and 1). Computationally, however, the min is redundant, because whenever p(cand) / p(z) > 1 it is also always greater than u.
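A quick way to convince yourself of this (my own check, not part of the original code) is to compare the two acceptance rules on random non-negative ratios:

import numpy as np

r = np.random.exponential(size=10_000)  # arbitrary non-negative likelihood ratios
u = np.random.rand(10_000)              # uniform draws in [0, 1)
# Whenever r > 1, both rules accept, so the decisions are identical
assert np.array_equal(np.minimum(1, r) > u, r > u)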
Putting all of this together, and pre-computing the random innovations and the uniform draws u, and only computing the likelihood when you really need to, I came up with:
def my_Metropolis_Gaussian(p, z0, sigma, n_samples=100, burn_in=0, m=1):
    # Pre-allocate memory for samples (much more efficient than using append)
    samples = np.zeros(n_samples)
    # Store initial value
    samples[0] = z0
    z = z0
    # Compute the current likelihood
    l_cur = p(z)
    # Counter
    iter = 0
    # Total number of iterations to make to achieve desired number of samples
    iters = (n_samples * m) + burn_in
    # Sample outside the for loop
    innov = np.random.normal(loc=0, scale=sigma, size=iters)
    u = np.random.rand(iters)
    while iter < iters:
        # Random walk innovation on z
        cand = z + innov[iter]
        # Compute candidate likelihood
        l_cand = p(cand)
        # Accept or reject candidate
        if l_cand / l_cur > u[iter]:
            z = cand
            l_cur = l_cand
        # Only keep iterations after burn-in and for every m-th iteration
        if iter > burn_in and iter % m == 0:
            samples[(iter - burn_in) // m] = z
        iter += 1
    return samples
Looking at the performance, this implementation is roughly 2x faster than the original, which is not bad for a few small changes.
In [1]: %timeit Metropolis_Gaussian(pdf_t, 3, 1, n_samples=100, burn_in=100, m=10)
205 ms ± 2.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [2]: %timeit my_Metropolis_Gaussian(pdf_t, 3, 1, n_samples=100, burn_in=100, m=10)
102 ms ± 1.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
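As a quick sanity check (my own addition, not part of the original timing comparison), the samples from the faster implementation can be compared against the known moments of a t-distribution with 10 degrees of freedom, which has mean 0 and variance df / (df - 2) = 1.25; with a finite MCMC run the estimates will of course only be approximate:

t_samples = my_Metropolis_Gaussian(pdf_t, 3, 1, n_samples=1000, burn_in=1000, m=100)
print(t_samples.mean())  # should be roughly 0
print(t_samples.var())   # should be roughly 10 / (10 - 2) = 1.25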