Tensorflow Probability returns unstable predictions

I am using a TensorFlow Probability model. The result is of course probabilistic, and the derivative of the error does not go to zero (otherwise the model would be deterministic). The predictions are unstable because the derivative of the loss falls in a range, say from 1.2 to 0.2 in a convex optimization, as an example.
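(As an aside, np.random.seed only pins NumPy, i.e. the noise added to x1 below. If run-to-run repeatability were the goal, TensorFlow's own randomness, e.g. the glorot_uniform weight initializer, would presumably also have to be seeded. A minimal sketch, assuming the TF 1.x API used in the rest of the code:)

import numpy as np
import tensorflow as tf

np.random.seed(42)      # pins NumPy (the noise added to x1)
tf.set_random_seed(42)  # pins TensorFlow's graph-level RNG (weight initialization)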

Each time I train the model, this interval produces different predictions. Sometimes I get a very good fit (red = real values, blue lines = prediction +2 and -2 standard deviations):

Sometimes it does not, with the same hyperparameters:

Sometimes it is mirrored:

For business purposes this is a big problem, because the predictions are expected to give a stable output.

The code is below:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
np.random.seed(42)
dataframe = pd.read_csv('Apple_Data_300.csv').ix[0:800,:]
dataframe.head()

plt.plot(range(0,dataframe.shape[0]),dataframe.iloc[:,1])

x1=np.array(dataframe.iloc[:,1]+np.random.randn(dataframe.shape[0])).astype(np.float32).reshape(-1,1)

y=np.array(dataframe.iloc[:,1]).T.astype(np.float32).reshape(-1,1)

tfd = tfp.distributions

model = tf.keras.Sequential([
  tf.keras.layers.Dense(1,kernel_initializer='glorot_uniform'),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
  tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1))
])
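# negative log-likelihood of the observed y under the predicted distribution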
negloglik = lambda x, rv_x: -rv_x.log_prob(x)

model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.0001), loss=negloglik)

model.fit(x1,y, epochs=500, verbose=True)

yhat = model(x1)
mean = yhat.mean()

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    mm = sess.run(mean)    
    mean = yhat.mean()
    stddev = yhat.stddev()
    mean_plus_2_std = sess.run(mean - 2. * stddev)
    mean_minus_2_std = sess.run(mean + 2. * stddev)


plt.figure(figsize=(8,6))
plt.plot(y,color='red',linewidth=1)
#plt.plot(mm)
plt.plot(mean_minus_2_std,color='blue',linewidth=1)
plt.plot(mean_plus_2_std,color='blue',linewidth=1)

The loss:

Epoch 498/500
801/801 [==============================] - 0s 32us/sample - loss: 2.4169
Epoch 499/500
801/801 [==============================] - 0s 30us/sample - loss: 2.4078
Epoch 500/500
801/801 [==============================] - 0s 31us/sample - loss: 2.3944

Is there a way to control the prediction output of a probabilistic model? The loss gets stuck at 1.42, even after lowering the learning rate and increasing the number of training epochs. What am I missing here?

Working code after the answer:

init = tf.global_variables_initializer()

with tf.Session() as sess:

    model = tf.keras.Sequential([
      tf.keras.layers.Dense(1,kernel_initializer='glorot_uniform'),
      tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1))
    ])
    negloglik = lambda x, rv_x: -rv_x.log_prob(x)

    model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.0001), loss=negloglik)

    model.fit(x1,y, epochs=500, verbose=True, batch_size=16)

    yhat = model(x1)
    mean = yhat.mean()

    sess.run(init)
    mm = sess.run(mean)    
    mean = yhat.mean()
    stddev = yhat.stddev()
    mean_plus_2_std = sess.run(mean - 3. * stddev)
    mean_minus_2_std = sess.run(mean + 3. * stddev)

Are you running tf.global_variables_initializer too late?

I found this in this answer:

Variable initializers must be run explicitly before other ops in your model can be run. The easiest way to do that is to add an op that runs all the variable initializers, and run that op before using the model.
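
A minimal sketch of that ordering, assuming the same TF 1.x graph/session API as above: create all variables first, build the init op afterwards, and run it before evaluating anything that depends on those variables.

import tensorflow as tf

# Build the graph first, so that every variable already exists.
w = tf.Variable(tf.zeros([1]))
output = w + 1.0

# Create the initializer only after all variables have been defined ...
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # ... and run it before any other op that reads those variables.
    sess.run(init)
    print(sess.run(output))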