Simple Hamiltonian Monte Carlo Example with TensorFlow Probability's Edward2
Edward example
As Edward is deprecated and requires an older version of TensorFlow, a dedicated virtual environment can be created for the following example
$ python3 --version
Python 3.6.8
$ python3 -m venv edward
$ source edward/bin/activate
(edward) $ pip3 install --upgrade pip setuptools wheel
(edward) $ cat edward.txt
tensorflow==1.7
edward~=1.3
scipy~=1.2
pandas~=0.24
matplotlib~=3.0
(edward) $ pip3 install -r edward.txt
I have a very simple minimal working example that uses Hamiltonian Monte Carlo with Edward, edward_old.py
#!/usr/bin/env python3
import numpy as np
import scipy.stats
import tensorflow as tf
import edward as ed
import pandas as pd
import matplotlib.pyplot as plt


def generate_samples(data, n_samples):
    # Pick initial point for MCMC chains based on the data
    low, med, high = np.percentile(data, (16, 50, 84))
    mu_init = np.float32(med)
    t_init = np.float32(np.log(0.5 * (high - low)))

    # Build a very simple model
    mu = ed.models.Uniform(-1.0, 1.0)
    t = ed.models.Uniform(*np.log((0.05, 1.0), dtype=np.float32))
    X = ed.models.Normal(
        loc=tf.fill(data.shape, mu), scale=tf.fill(data.shape, tf.exp(t))
    )

    # Empirical samples of a scalar
    q_mu = ed.models.Empirical(params=tf.Variable(tf.fill((n_samples,), mu_init)))
    q_t = ed.models.Empirical(params=tf.Variable(tf.fill((n_samples,), t_init)))

    # Run inference using HMC to generate samples.
    with tf.Session() as sess:
        inference = ed.HMC({mu: q_mu, t: q_t}, data={X: data})
        inference.run(step_size=0.01, n_steps=10)
        mu_samples, t_samples = sess.run([q_mu.params, q_t.params])

    return mu_samples, t_samples


def visualize(samples, mu_grid, sigma_grid):
    fig, ax = plt.subplots(1, 1, figsize=(6, 5))
    ax.scatter(samples['mu'], samples['sigma'], s=5, lw=0, c='black')
    ax.set_xlim(mu_grid[0], mu_grid[-1])
    ax.set_ylim(sigma_grid[0], sigma_grid[-1])
    ax.set_title('Edward')
    ax.set_xlabel(r'$\mu$')
    ax.set_ylabel(r'$\sigma$')
    plt.savefig('edward_old.pdf')


def main():
    np.random.seed(0)
    tf.set_random_seed(0)

    # Generate pseudodata from draws from a single normal distribution
    dist_mean = 0.0
    dist_std = 0.5
    n_events = 5000
    toy_data = scipy.stats.norm.rvs(dist_mean, dist_std, size=n_events)

    mu_samples, t_samples = generate_samples(toy_data, n_events)
    samples = pd.DataFrame({'mu': mu_samples, 'sigma': np.exp(t_samples)})

    n_grid = 50
    mu_grid = np.linspace(*np.percentile(mu_samples, (0.5, 99.5)), n_grid)
    sigma_grid = np.linspace(*np.exp(np.percentile(t_samples, (0.5, 99.5))), n_grid)
    visualize(samples, mu_grid, sigma_grid)


if __name__ == '__main__':
    main()
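As an aside, the percentile-based initialization in generate_samples can be sanity-checked on its own. Here is a minimal sketch (pure NumPy/SciPy, nothing Edward-specific) showing that the median estimates mu and that half of the 16th-to-84th percentile spread estimates sigma for normal data:

import numpy as np
import scipy.stats

np.random.seed(0)
data = scipy.stats.norm.rvs(0.0, 0.5, size=5000)

low, med, high = np.percentile(data, (16, 50, 84))
print('mu_init    = {:.3f}'.format(med))                 # close to the true mean 0.0
print('sigma_init = {:.3f}'.format(0.5 * (high - low)))  # close to the true std 0.5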
Running it via

(edward) $ python3 edward_old.py

produces the figure below.
Edward2 example
However, when I try to replicate it using TensorFlow Probability and Edward2 in the following environment
$ python3 --version
Python 3.6.8
$ python3 -m venv tfp-edward2
$ source tfp-edward2/bin/activate
(tfp-edward2) $ pip3 install --upgrade pip setuptools wheel
(tfp-edward2) $ cat tfp-edward2.txt
tensorflow~=1.13
tensorflow-probability~=0.6
scipy~=1.2
pandas~=0.24
matplotlib~=3.0
(tfp-edward2) $ pip3 install -r tfp-edward2.txt
with the following changes to generate_samples from edward_old.py in a file named edward2.py
#!/usr/bin/env python3
import numpy as np
import scipy.stats
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import edward2 as ed
import pandas as pd
import matplotlib.pyplot as plt


def generate_samples(data, n_samples):
    # Pick initial point for MCMC chains based on the data
    low, med, high = np.percentile(data, (16, 50, 84))
    mu_init = np.float32(med)
    t_init = np.float32(np.log(0.5 * (high - low)))

    def model(data_shape):
        mu = ed.Uniform(
            low=tf.fill(data_shape, -1.0), high=tf.fill(data_shape, 1.0), name="mu"
        )
        t = ed.Uniform(
            low=tf.log(tf.fill(data_shape, 0.05)),
            high=tf.log(tf.fill(data_shape, 1.0)),
            name="t",
        )
        x = ed.Normal(loc=mu, scale=tf.exp(t), name="x")
        return x

    log_joint = ed.make_log_joint_fn(model)

    def target_log_prob_fn(mu, t):
        """Target log-probability as a function of states."""
        return log_joint(data.shape, mu=mu, t=t, x=data)

    step_size = tf.get_variable(
        name='step_size',
        initializer=0.01,
        use_resource=True,  # For TFE compatibility
        trainable=False,
    )

    num_burnin_steps = 1000
    hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob_fn,
        num_leapfrog_steps=5,
        step_size=step_size,
        step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
            num_adaptation_steps=int(num_burnin_steps * 0.8)
        ),
    )

    # How should these be done?
    q_mu = tf.random_normal(data.shape, mean=mu_init)
    q_t = tf.random_normal(data.shape, mean=t_init)

    states, kernel_results = tfp.mcmc.sample_chain(
        num_results=n_samples,
        current_state=[q_mu, q_t],
        kernel=hmc_kernel,
        num_burnin_steps=num_burnin_steps,
    )

    # Initialize all constructed variables.
    init_op = tf.global_variables_initializer()

    # Run the inference using HMC to generate samples
    with tf.Session() as sess:
        init_op.run()
        states_, results_ = sess.run([states, kernel_results])

    mu_samples, t_samples = states_[0][0], states_[1][0]
    return mu_samples, t_samples
running

(tfp-edward2) $ python3 edward2.py

shows that there are some obvious problems. I don't think I'm formulating the equivalent of ed.models.Empirical correctly, so thoughts on that, or on anything else I'm doing wrong, would be great.
I have tried to follow the "Upgrading from Edward to Edward2" examples, but I don't understand them well enough to transfer from the deep_exponential_family model used there to this example.
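For reference, a quick shape printout I used while debugging (a sketch; the (5000,) shapes assume n_events = 5000 as in main) shows what sample_chain is actually being handed:

# Inside generate_samples, just before tfp.mcmc.sample_chain:
print(q_mu.shape)       # (5000,) -- one "position" per data point, not a scalar
print(q_t.shape)        # (5000,)
# and after sample_chain:
print(states[0].shape)  # (n_samples, 5000)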
The problem I created for myself was completely messing up the shapes of my distributions. What I failed to understand at first was that the current_state of my tfp.mcmc.sample_chain should be scalars (shape == ()) representing the initial positions of the chains. Once I realized that, it became clear that those positions, q_mu and q_t, had entirely the wrong shape and should instead be the sample means of draws around the positions determined from the data
q_mu = tf.reduce_mean(tf.random_normal((1000,), mean=mu_init))
q_t = tf.reduce_mean(tf.random_normal((1000,), mean=t_init))
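Since sample_chain only needs a scalar starting point per state, I suspect (an untested sketch on my part) the random draw plus reduce_mean could be skipped entirely in favor of the data-derived values themselves:

# Hypothetical simplification: start each chain exactly at the
# percentile-based estimates instead of at a noisy mean around them.
q_mu = tf.constant(mu_init)  # shape == ()
q_t = tf.constant(t_init)    # shape == ()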
As these values are scalars, I had also been creating my model's shapes incorrectly. I had been creating samples of my random variables with the same shape as my data, mistakenly thinking that this simply moved the shape of x over to the shapes of mu and t. Of course, mu and t are scalar random variables drawn from their respective uniform distributions, which then parameterize the normal distribution of x, from which data.shape samples are drawn.
def model(data_shape):
    mu = ed.Uniform(low=-1.0, high=1.0, name="mu")
    t = ed.Uniform(low=tf.log(0.05), high=tf.log(1.0), name="t")
    x = ed.Normal(
        loc=tf.fill(data_shape, mu), scale=tf.fill(data_shape, tf.exp(t)), name="x"
    )
    return x
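A check that would have caught the shape mistake earlier is to evaluate the joint log-probability at a trial point and confirm it is a scalar. This is a self-contained sketch under the same tensorflow~=1.13 / tensorflow-probability~=0.6 environment assumed above:

import numpy as np
import scipy.stats
import tensorflow as tf
from tensorflow_probability import edward2 as ed


def model(data_shape):
    mu = ed.Uniform(low=-1.0, high=1.0, name="mu")
    t = ed.Uniform(low=tf.log(0.05), high=tf.log(1.0), name="t")
    x = ed.Normal(
        loc=tf.fill(data_shape, mu), scale=tf.fill(data_shape, tf.exp(t)), name="x"
    )
    return x


log_joint = ed.make_log_joint_fn(model)
data = np.float32(scipy.stats.norm.rvs(0.0, 0.5, size=5000))

# Evaluate the joint density at scalar states; HMC expects one
# log-probability per chain, so this should have shape ().
lp = log_joint(data.shape, mu=np.float32(0.0), t=np.float32(np.log(0.5)), x=data)
print(lp.shape)  # ()

with tf.Session() as sess:
    print(sess.run(lp))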
Once that was done, the only thing left to do was to access the states correctly
with tf.Session() as sess:
    init_op.run()
    states_, results_ = sess.run([states, kernel_results])

mu_samples, t_samples = (states_[0], states_[1])
which produces the image below via

(tfp-edward2) $ python3 edward2.py

and matches the original version using Edward nicely.
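One extra sanity check on the chain itself: the kernel results fetched alongside the states should include an is_accepted field (at least in the tfp 0.6 HamiltonianMonteCarlo kernel, as assumed here), so the acceptance rate can be printed after the session run:

# After `states_, results_ = sess.run([states, kernel_results])`,
# results_.is_accepted holds one boolean per kept sample.
print('acceptance rate: {:.3f}'.format(results_.is_accepted.mean()))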
The fully corrected script is below
#!/usr/bin/env python3
import numpy as np
import scipy.stats
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import edward2 as ed
import pandas as pd
import matplotlib.pyplot as plt


def generate_samples(data, n_samples):
    # Pick initial point for MCMC chains based on the data
    low, med, high = np.percentile(data, (16, 50, 84))
    mu_init = np.float32(med)
    t_init = np.float32(np.log(0.5 * (high - low)))

    def model(data_shape):
        mu = ed.Uniform(low=-1.0, high=1.0, name="mu")
        t = ed.Uniform(low=tf.log(0.05), high=tf.log(1.0), name="t")
        x = ed.Normal(
            loc=tf.fill(data_shape, mu), scale=tf.fill(data_shape, tf.exp(t)), name="x"
        )
        return x

    log_joint = ed.make_log_joint_fn(model)

    def target_log_prob_fn(mu, t):
        """Target log-probability as a function of states."""
        return log_joint(data.shape, mu=mu, t=t, x=data)

    step_size = tf.get_variable(
        name='step_size',
        initializer=0.01,
        use_resource=True,  # For TFE compatibility
        trainable=False,
    )

    num_burnin_steps = 1000
    hmc_kernel = tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob_fn,
        num_leapfrog_steps=5,
        step_size=step_size,
        step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
            num_adaptation_steps=int(num_burnin_steps * 0.8)
        ),
    )

    # Initial states of chains
    q_mu = tf.reduce_mean(tf.random_normal((1000,), mean=mu_init))
    q_t = tf.reduce_mean(tf.random_normal((1000,), mean=t_init))

    states, kernel_results = tfp.mcmc.sample_chain(
        num_results=n_samples,
        current_state=[q_mu, q_t],
        kernel=hmc_kernel,
        num_burnin_steps=num_burnin_steps,
    )

    # Initialize all constructed variables.
    init_op = tf.global_variables_initializer()

    # Run the inference using HMC to generate samples
    with tf.Session() as sess:
        init_op.run()
        states_, results_ = sess.run([states, kernel_results])

    mu_samples, t_samples = (states_[0], states_[1])
    return mu_samples, t_samples


def visualize(samples, mu_grid, sigma_grid):
    fig, ax = plt.subplots(1, 1, figsize=(6, 5))
    ax.scatter(samples['mu'], samples['sigma'], s=5, lw=0, c='black')
    ax.set_xlim(mu_grid[0], mu_grid[-1])
    ax.set_ylim(sigma_grid[0], sigma_grid[-1])
    ax.set_title('tfp and Edward2')
    ax.set_xlabel(r'$\mu$')
    ax.set_ylabel(r'$\sigma$')
    plt.savefig('tfp-edward2.pdf')
    plt.savefig('tfp-edward2.png')


def main():
    np.random.seed(0)
    tf.set_random_seed(0)

    # Generate pseudodata from draws from a single normal distribution
    dist_mean = 0.0
    dist_std = 0.5
    n_events = 5000
    toy_data = scipy.stats.norm.rvs(dist_mean, dist_std, size=n_events)

    mu_samples, t_samples = generate_samples(toy_data, n_events)
    samples = pd.DataFrame({'mu': mu_samples, 'sigma': np.exp(t_samples)})

    n_grid = 50
    mu_grid = np.linspace(*np.percentile(mu_samples, (0.5, 99.5)), n_grid)
    sigma_grid = np.linspace(*np.exp(np.percentile(t_samples, (0.5, 99.5))), n_grid)
    visualize(samples, mu_grid, sigma_grid)


if __name__ == '__main__':
    main()
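If it is useful for comparing the two versions numerically, a small addition to main() (my own sketch, not part of the original script) prints posterior summaries that should land near the true generating values dist_mean = 0.0 and dist_std = 0.5:

# Append to main() after generate_samples(...) returns:
sigma_samples = np.exp(t_samples)
print('mu:    {:.3f} +/- {:.3f}'.format(mu_samples.mean(), mu_samples.std()))
print('sigma: {:.3f} +/- {:.3f}'.format(sigma_samples.mean(), sigma_samples.std()))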