Where is the key to make tensorboard work?
I am new to tensorboard and followed the tutorial to learn how to use it; that went well and tensorboard worked as expected.
Based on that tutorial, I wrote my own code in a Jupyter notebook to train a logical-AND model:
%load_ext tensorboard
import datetime
log_folder = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

import tensorflow as tf
import numpy as np

x_train = np.asarray([[0, 0], [0, 1], [1, 0], [1, 1]], np.float32)
y_train = np.asarray([0, 0, 0, 1], np.float32)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

# Per-sample binary cross-entropy
def custom_loss(y, a):
    return -(y * tf.math.log(a) + (1 - y) * tf.math.log(1 - a))

model.compile(loss=custom_loss,
              optimizer='SGD',
              metrics=['accuracy'])

tensorboard_callback = tf.keras.callbacks.TensorBoard(log_folder, histogram_freq=1)
model.fit(x_train, y_train, epochs=2000, verbose=0,
          callbacks=[tensorboard_callback])
Training went fine, though the fit could use some improvement.
However, tensorboard shows nothing:
%tensorboard --logdir log_folder
Where is the key to make tensorboard work?
You are just using the IPython magic incorrectly: you need to prefix the variable name with a dollar sign (see How to pass a variable to magic 'run' function in IPython).
%tensorboard --logdir $log_folder
To explore a bit further, pretend you are working in the future (at least as far as the date is concerned): add a cell like this one, which logs a run time-stamped one day earlier:
log_folder_future = "logs/fit/" + (datetime.datetime.now() - datetime.timedelta(days=1)).strftime("%Y%m%d-%H%M%S")
up_dir = './' + '/'.join(log_folder_future.split('/')[:-1])

model.compile(loss=custom_loss,
              optimizer='SGD',
              metrics=['accuracy'])

tensorboard_callback_future = tf.keras.callbacks.TensorBoard(log_folder_future, histogram_freq=1)
model.fit(x_train, y_train, epochs=2500, verbose=0,
          callbacks=[tensorboard_callback_future])
Then call it like this:
%tensorboard --logdir $up_dir
and you finally get something like this:
For more on the tensorboard directory structure and multiple runs, see this page:
https://github.com/tensorflow/tensorboard/blob/master/README.md#runs-comparing-different-executions-of-your-model