How to read data from a TensorFlow 2 summary writer
I am having trouble reading data back from a TensorFlow summary writer.
I am using the writer from the example on the TensorFlow website: https://www.tensorflow.org/tensorboard/migrate
import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

writer = tf.summary.create_file_writer("/tmp/mylogs/eager")

# write to summary writer
with writer.as_default():
    for step in range(100):
        # other model code would go here
        tf.summary.scalar("my_metric", 0.5, step=step)
    writer.flush()
# read from summary writer
event_acc = EventAccumulator("/tmp/mylogs/eager")
event_acc.Reload()
event_acc.Tags()
yields:
{'distributions': [],
 'graph': False,
 'histograms': [],
 'images': [],
 'meta_graph': False,
 'run_metadata': [],
 'scalars': [],
 'tensors': ['my_metric']}
If I try to get the tensor data:
import pandas as pd
pd.DataFrame(event_acc.Tensors('my_metric'))
I do not get the expected values:
wall_time step tensor_proto
0 1.590743e+09 3 dtype: DT_FLOAT\ntensor_shape {\n}\ntensor_con...
1 1.590743e+09 20 dtype: DT_FLOAT\ntensor_shape {\n}\ntensor_con...
2 1.590743e+09 24 dtype: DT_FLOAT\ntensor_shape {\n}\ntensor_con...
3 1.590743e+09 32 dtype: DT_FLOAT\ntensor_shape {\n}\ntensor_con...
...
How do I get the actual summary data (it should be 0.5 at each of the 100 steps)?
Here is a colab notebook with the code above: https://colab.research.google.com/drive/1RlgZrGD_vY-YcOBLF_sEPelmtVuygkqz?usp=sharing
You need to convert the tensor values from the event accumulator, which are stored as TensorProto messages, into arrays, which you can do with tf.make_ndarray:
pd.DataFrame([(w, s, tf.make_ndarray(t)) for w, s, t in event_acc.Tensors('my_metric')],
columns=['wall_time', 'step', 'tensor'])
To avoid missing steps (the event accumulator downsamples events by default), I recommend disabling the size limit for tensors:
event_acc = EventAccumulator("/tmp/mylogs/eager", size_guidance={'tensors': 0})
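Putting both pieces together, here is a minimal end-to-end sketch (using the same log directory as in the question) that writes the events, reloads them with downsampling disabled, and decodes each TensorProto into a plain Python float:

```python
import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

logdir = "/tmp/mylogs/eager"

# write 100 scalar events, 0.5 at every step
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step in range(100):
        tf.summary.scalar("my_metric", 0.5, step=step)
    writer.flush()

# size_guidance={'tensors': 0} keeps every event instead of a sampled subset
event_acc = EventAccumulator(logdir, size_guidance={'tensors': 0})
event_acc.Reload()

# each TensorEvent is a (wall_time, step, tensor_proto) namedtuple;
# tf.make_ndarray decodes the proto to a 0-d numpy array, .item() to a float
values = [tf.make_ndarray(e.tensor_proto).item()
          for e in event_acc.Tensors("my_metric")]
print(len(values), values[0])
```

Since 0.5 is exactly representable in float32, the decoded values compare equal to 0.5 with no rounding error.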