Does pymc3 have an equivalent for the predict method in scikit-learn?

I am working through the Statistical Rethinking course using PyMC3. At the end of Chapter 4, it asks for the HDIs of individual values for data points that are not in the original (!Kung) dataset. Is there a way to do this in PyMC3?

In scikit-learn you have fit() and predict(), so you can predict the output for completely new inputs.
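
For comparison, this is roughly the scikit-learn workflow I have in mind (a throwaway sketch with made-up numbers, just to show the fit()/predict() pattern I am looking for an equivalent of):

from sklearn.linear_model import LinearRegression
import numpy as np

# toy data: weights (inputs) and heights (outputs)
X = np.array([[45.0], [40.0], [65.0], [31.0]])
y = np.array([154.0, 150.0, 172.0, 142.0])

model = LinearRegression()
model.fit(X, y)                      # learn the parameters

X_new = np.array([[53.0], [60.0]])   # completely new inputs
print(model.predict(X_new))          # point predictions for the new inputs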

With PyMC3, you can sample() to get your trace, and you can ask for a posterior predictive check, but I could not find any argument that lets me pass in the values I am interested in. I did manage to do it in a roundabout way using a shared theano variable, and also by hand.

Edit: I added a pm.Data() / pm.set_data() example at the end. I think that may be the answer, but I am waiting for someone else to confirm before marking this as answered.


Here is what I did.

weight_s is the standardized weight data. The standardization is done with this function:

def standardize(array, reference=None):
    if reference is None:
        reference = array
    return (array - reference.mean()) / reference.std()
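
For completeness, weight_s is just this function applied to the adult weights; something along these lines (the construction of the adults DataFrame itself is omitted here, so take this as my setup assumption):

# assumption: `adults` already holds the adult !Kung rows, with `weight` and `height` columns
adults["weight_s"] = standardize(adults.weight)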

Here is the PyMC3 model:

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm

with pm.Model() as m_adult:
    a = pm.Normal("α", mu=155, sd=20)
    b = pm.Lognormal("β", mu=0, sd=1)
    mu = pm.Deterministic("μ", a + b * adults.weight_s)
    sigma = pm.Uniform("σ", 0, 50)
    height = pm.Normal("height", mu=mu, sd=sigma, observed=adults.height)
    trace_adult = pm.sample()

The data looks like this (note that the HDIs are 89% ones):

height_pred = pm.fast_sample_posterior_predictive(trace_adult, model=m_adult)["height"]

fig, ax = plt.subplots()
ax.plot(adults.weight, adults.height, ".")
ax.plot(adults.weight, trace_adult.μ.mean(axis=0), color="black")
az.plot_hdi(adults.weight, trace_adult.μ, ax=ax, color="black")
az.plot_hdi(adults.weight, height_pred, ax=ax)
ax.set(xlabel="weight", ylabel="height")
fig.tight_layout()
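
Note that az.hdi and az.plot_hdi default to 94% intervals; to get the 89% HDIs mentioned above, one option (not shown in the code above, so take it as my setup assumption) is to change the ArviZ default once before plotting:

import arviz as az

# make az.hdi and az.plot_hdi use 89% intervals by default instead of 94%
az.rcParams["stats.hdi_prob"] = 0.89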

First, the manual version:

missing_weights = np.array([45, 40, 65, 31, 53])

# posterior mean of μ = α + β * weight_s for each new weight
expected_height = np.array([
    (trace_adult.α + trace_adult.β * standardize(weight, adults.weight)).mean()
    for weight in missing_weights
])
# HDI of the posterior predictive: one Normal(μ, σ) draw per posterior sample
hdis = np.array([
    az.hdi(np.random.normal(
        trace_adult.α + trace_adult.β * standardize(weight, adults.weight),
        trace_adult.σ,
    )) for weight in missing_weights
])

data = np.vstack((missing_weights, expected_height, hdis.T)).T
missing_df = pd.DataFrame(data, columns=["weight", "expected_height", "hdi_lower", "hdi_upper"])
print(missing_df)

This gives us:

   weight  expected_height   hdi_lower   hdi_upper
0      45       154.603176  146.981285  163.149938
1      40       150.105295  142.095583  158.474277
2      65       172.594698  164.401102  180.786641
3      31       142.009110  134.163952  150.233028
4      53       161.799785  153.881956  170.209779

These numbers make sense if you look at the graph.

Now for the shared variable. We can modify the model like this:

from theano import shared

shared_weights_s = shared(adults.weight_s.values)
with pm.Model() as m_adult:
    a = pm.Normal("α", mu=155, sd=20)
    b = pm.Lognormal("β", mu=0, sd=1)
    mu = pm.Deterministic("μ", a + b * shared_weights_s)
    sigma = pm.Uniform("σ", 0, 50)
    height = pm.Normal("height", mu=mu, sd=sigma, observed=adults.height)
    trace_adult = pm.sample()

Now we have three options for setting new values of shared_weights:

The one-at-a-time case:

missing_weights = np.array([45, 40, 65, 31, 53])

rows = []

for weight in missing_weights:
    row = [weight]
    shared_weights_s.set_value(standardize(np.array([weight]), adults.weight))
    height_pred_single = pm.fast_sample_posterior_predictive(trace_adult, model=m_adult)["height"]
    row.append(height_pred_single.mean())
    row.extend(list(az.hdi(height_pred_single).mean(axis=0)))
    rows.append(row)

missing_df = pd.DataFrame(rows, columns=["weight", "expected_height", "hdi_lower", "hdi_upper"])
print(missing_df)

Doing this for all of them gives us:

   weight  expected_height   hdi_lower   hdi_upper
0      45       154.604520  146.485327  162.713345
1      40       150.113378  142.001151  158.263953
2      65       172.580212  164.357970  180.843184
3      31       142.010954  133.786200  150.142080
4      53       161.792962  153.651266  169.926615

You can also do them all at once:

missing_weights = np.array([45, 40, 65, 31, 53])

shared_weights_s.set_value(standardize(missing_weights, adults.weight))
height_pred_replace = pm.fast_sample_posterior_predictive(trace_adult, model=m_adult)["height"]
missing_df = pd.DataFrame(missing_weights, columns=["weight"])
missing_df["expected_height"] = height_pred_replace.mean(axis=0)
missing_df[["hdi_lower", "hdi_upper"]] = az.hdi(height_pred_replace)
print(missing_df)

This gives us:

   weight  expected_height   hdi_lower   hdi_upper
0      45       154.578096  147.066342  163.069805
1      40       150.042506  141.561599  158.120596
2      65       172.568430  164.079591  180.536870
3      31       142.080048  134.173959  150.345556
4      53       161.830472  153.327694  169.717058

Finally, we can append them to the end of the previous shared weights values and take the tail:

missing_weights = np.array([45, 40, 65, 31, 53])

shared_weights_s.set_value(np.append(adults.weight_s.values, standardize(missing_weights, adults.weight)))
height_pred_append = pm.fast_sample_posterior_predictive(trace_adult, model=m_adult)["height"]
missing_df = pd.DataFrame(missing_weights, columns=["weight"])
missing_df["expected_height"] = height_pred_append.mean(axis=0)[-len(missing_weights):]
missing_df[["hdi_lower", "hdi_upper"]] = az.hdi(height_pred_append)[-len(missing_weights):]
print(missing_df)

This gives us:

   weight  expected_height   hdi_lower   hdi_upper
0      45       154.640287  146.093825  162.477313
1      40       150.088713  142.168331  158.314038
2      65       172.633776  164.086280  180.483805
3      31       142.019331  133.516545  150.491937
4      53       161.880175  153.530868  169.771088

As you can see, all of these methods end up giving the same results. Is there an official/best way of doing this? Can it be done without setting a global shared variable, i.e. without modifying the model? Does PyMC3 have a feature like this, or is one planned for the future? (If it is simple enough, I could open a pull request for it; I am still pretty new to PyMC3.)


Edit: I think I found the answer: use pm.Data().

with pm.Model() as m_adult:
    weight_s = pm.Data("weight_s", adults.weight_s.values)
    a = pm.Normal("α", mu=155, sd=20)
    b = pm.Lognormal("β", mu=0, sd=1)
    mu = pm.Deterministic("μ", a + b * weight_s)
    sigma = pm.Uniform("σ", 0, 50)
    height = pm.Normal("height", mu=mu, sd=sigma, observed=adults.height)
    trace_adult = pm.sample()

Then, when we want to try out new values, we pm.set_data():

missing_weights = np.array([45, 40, 65, 31, 53])

with m_adult:
    pm.set_data({"weight_s": standardize(missing_weights, adults.weight)})
    height_pred_data = pm.fast_sample_posterior_predictive(trace_adult)["height"]

missing_df = pd.DataFrame(missing_weights, columns=["weight"])
missing_df["expected_height"] = height_pred_data.mean(axis=0)
missing_df[["hdi_lower", "hdi_upper"]] = az.hdi(height_pred_data)
print(missing_df)

which gives:

   weight  expected_height   hdi_lower   hdi_upper
0      45       154.584063  145.828088  162.512174
1      40       150.184853  142.272258  158.451555
2      65       172.662069  164.522903  180.803430
3      31       141.949137  133.310865  149.811098
4      53       161.719867  153.848599  169.638495
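
As a usage note, this pm.Data() / pm.set_data() pattern is the closest thing to fit()/predict() I have found so far, and it is easy to wrap in a small helper; a minimal sketch (the predict_heights name and return value are my own, not part of PyMC3):

def predict_heights(new_weights):
    # swap in the new (standardized) weights and resample the posterior predictive
    with m_adult:
        pm.set_data({"weight_s": standardize(new_weights, adults.weight)})
        return pm.fast_sample_posterior_predictive(trace_adult)["height"]

# scikit-learn-style usage: posterior mean and HDI for brand-new inputs
height_samples = predict_heights(np.array([45, 40, 65, 31, 53]))
print(height_samples.mean(axis=0))
print(az.hdi(height_samples))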
