Computing the loss (MSE) for every iteration and over time in TensorFlow

I want to use TensorBoard to plot the mean squared error (y-axis) for every iteration over a given time frame (x-axis), say 5 minutes.

However, I am only able to plot the MSE given for every epoch and to set a callback that stops training after 5 minutes. This does not, however, solve my problem.

I have tried searching online for how to set a maximum number of iterations rather than epochs when calling model.fit, but without luck. I know the number of iterations is the number of batches needed to complete one epoch, but since I want to tune the batch_size, I would prefer to work in iterations.

My code currently looks like this:

import tensorflow as tf
from tensorflow import keras
import tensorflow_addons as tfa

input_size = len(train_dataset.keys())
output_size = 10
hidden_layer_size = 250
n_epochs = 3

weights_initializer = keras.initializers.GlorotUniform()

#A function that trains and validates the model and returns the MSE
def train_val_model(run_dir, hparams):
    model = keras.models.Sequential([
            #Layer to be used as an entry point into a Network
            keras.layers.InputLayer(input_shape=[len(train_dataset.keys())]),
            #Dense layer 1
            keras.layers.Dense(hidden_layer_size, activation='relu',
                               kernel_initializer=weights_initializer,
                               name='Layer_1'),
            #Dense layer 2
            keras.layers.Dense(hidden_layer_size, activation='relu',
                               kernel_initializer=weights_initializer,
                               name='Layer_2'),
            #activation function is linear since we are doing regression
            keras.layers.Dense(output_size, activation='linear', name='Output_layer')
                                ])
    
    #Use the stochastic gradient descent optimizer but change batch_size to get BSG, SGD or MiniSGD
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0,
                                        nesterov=False)
    
    #Compiling the model
    model.compile(optimizer=optimizer, 
                  loss='mean_squared_error', #Computes the mean of squares of errors between labels and predictions
                  metrics=['mean_squared_error']) #Computes the mean squared error between y_true and y_pred
    
    # initialize TimeStopping callback 
    time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5*60, verbose=1)
    
    #Training the network
    history = model.fit(normed_train_data, train_labels, 
         epochs=n_epochs,
         batch_size=hparams['batch_size'], 
         verbose=1,
         #validation_split=0.2,
         callbacks=[tf.keras.callbacks.TensorBoard(run_dir + "/Keras"), time_stopping_callback])
    
    return history

#train_val_model("logs/sample", {'batch_size': len(normed_train_data)})
train_val_model("logs/sample1", {'batch_size': 1})
%tensorboard --logdir_spec=BSG:logs/sample,SGD:logs/sample1

Resulting in:

The desired output should look something like this:

The reason you can't do this for every iteration is that the loss is computed at the end of each epoch. If you want to tune the batch size, run the model for a set number of epochs and evaluate. Start at 16 and go up in powers of 2, and see how far you can push the capacity of your network. Larger batch sizes are usually said to improve performance, but focusing on that alone is not that important. Focus on other things in the network first.
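
For instance, such a sweep could look something like this (a rough sketch that reuses the train_val_model function from the question; the specific batch sizes and the choice to compare the final training MSE are only illustrative):

#Sweep over batch sizes in powers of 2, starting at 16
for batch_size in [16, 32, 64, 128, 256]:
    history = train_val_model(f"logs/bs_{batch_size}", {'batch_size': batch_size})
    #Last recorded value of the mean_squared_error metric for this run
    final_mse = history.history['mean_squared_error'][-1]
    print(f"batch_size={batch_size}: final training MSE = {final_mse:.4f}")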

The answer is actually quite simple.

tf.keras.callbacks.TensorBoard has an update_freq argument that lets you control when losses and metrics are written to TensorBoard. The default is 'epoch', but you can change it to 'batch' or to an integer if you want to write to TensorBoard every n batches. See the documentation for more details: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard
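
For example, the callback from the question could be changed like this (a minimal sketch; update_freq=1 writes to TensorBoard after every batch, while e.g. update_freq=50 would write every 50 batches):

#TensorBoard callback that logs losses/metrics every batch instead of every epoch
tensorboard_callback = tf.keras.callbacks.TensorBoard(run_dir + "/Keras",
                                                      update_freq=1)

history = model.fit(normed_train_data, train_labels,
                    epochs=n_epochs,
                    batch_size=hparams['batch_size'],
                    verbose=1,
                    callbacks=[tensorboard_callback, time_stopping_callback])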