How to display more than 10 images in Tensorboard?
I noticed that no matter how many images I save to the tensorboard log file, tensorboard only displays 10 of them (per tag).
How can we increase the number of images or at least select which ones are displayed?
To reproduce what I mean, run the following MCVE:
import torch
from torch.utils.tensorboard import SummaryWriter

tb = SummaryWriter(comment="test")
for k in range(100):
    # create an image with some funny pattern
    b = [n for (n, c) in enumerate(bin(k)) if c == '1']
    img = torch.zeros((1, 10, 10))
    img[0, b, :] = 0.5
    img = img + img.permute([0, 2, 1])
    # add the image to the tensorboard file
    tb.add_image(tag="test", img_tensor=img, global_step=k)
This creates a folder runs in which the data is saved. From the same folder, execute tensorboard --logdir runs, open a browser, and go to localhost:6006 (or replace 6006 with whatever port tensorboard reports after starting). Then go to the tab named "images" and move the slider above the grayscale image.
In my case it only displays the images from steps
k = 3, 20, 24, 32, 37, 49, 52, 53, 67, 78
which is not even a nice uniform spacing, but looks rather random. I would prefer to
- see more than just 10 of the images I saved, and
- have more even spacing in steps between the displayed images.
How can I achieve that?
Edit: I just found the option --samples_per_plugin and tried tensorboard --logdir runs --samples_per_plugin "images=100". This did increase the number of images, but it only showed the images from steps k = 0, 1, 2, 3, ..., 78, and none above 78.
You might have to wait longer for all the data to load, but this is indeed the correct solution; see --help:
--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly specify how many samples
to keep per tag for that plugin. For unspecified plugins, TensorBoard
randomly downsamples logged summaries to reasonable values to prevent
out-of-memory errors for long running jobs. This flag allows fine
control over that downsampling. Note that 0 means keep all samples of
that type. For instance, "scalars=500,images=0" keeps 500 scalars and
all images. Most users should not need to set this flag. (default: '')
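Following that help text, passing 0 for the images plugin keeps every logged image, which avoids both the 10-image cap and the missing steps above 78. A sketch of the invocation, assuming the logs live in ./runs:

```shell
# keep all logged images (0 = no downsampling), at the cost of more memory and load time
tensorboard --logdir runs --samples_per_plugin "images=0"
```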
Regarding the random samples: that is also true, there is some randomness involved; from the FAQ:
Is my data being downsampled? Am I really seeing all the data?
TensorBoard uses reservoir sampling to downsample your data so that it
can be loaded into RAM. You can modify the number of elements it will
keep per tag in tensorboard/backend/application.py.
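Reservoir sampling explains why the surviving steps look random rather than evenly spaced: each logged item has an equal chance of staying in the fixed-size buffer, so the kept subset is a uniform random sample of the stream. A minimal sketch in Python (simplified; TensorBoard's actual reservoir also always retains the most recent sample):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # fill the reservoir with the first k items
            reservoir.append(item)
        else:
            # replace a random slot with probability k / (i + 1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# 100 logged steps but only 10 slots: an effectively random subset survives,
# analogous to the 10 seemingly arbitrary step numbers seen in the question
kept = sorted(reservoir_sample(range(100), 10))
print(kept)
```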