Google Inception tensorflow.python.framework.errors.ResourceExhaustedError
When I try to run Google's Inception model in a loop over a list of images, the problem below appears after about 100 images. It seems to be running out of memory. I'm running on CPU. Has anyone else encountered this issue?
Traceback (most recent call last):
  File "clean_dataset.py", line 33, in <module>
    description, score = inception.run_inference_on_image(f.read())
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 178, in run_inference_on_image
    node_lookup = NodeLookup()
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 83, in __init__
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 112, in load
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 110, in readlines
    self._prereadline_check()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 72, in _prereadline_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.ResourceExhaustedError: /tmp/imagenet/imagenet_2012_challenge_label_map_proto.pbtxt
real 6m32.403s
user 7m8.210s
sys 1m36.114s
The problem is that you can't simply import the original 'classify_image.py' (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py) into your own code, especially when you put it inside a huge loop to classify thousands of images 'in batch mode'.
Here is the original code:
with tf.Session() as sess:
  # Some useful tensors:
  # 'softmax:0': A tensor containing the normalized prediction across
  #   1000 labels.
  # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
  #   float description of the image.
  # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
  #   encoding of the image.
  # Runs the softmax tensor by feeding the image_data as input to the graph.
  softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
  predictions = sess.run(softmax_tensor,
                         {'DecodeJpeg/contents:0': image_data})
  predictions = np.squeeze(predictions)

  # Creates node ID --> English string lookup.
  node_lookup = NodeLookup()

  top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
  for node_id in top_k:
    human_string = node_lookup.id_to_string(node_id)
    score = predictions[node_id]
    print('%s (score = %.5f)' % (human_string, score))
As you can see above, each classification task creates a new instance of the class 'NodeLookup', which loads from the following files:
- label_lookup="imagenet_2012_challenge_label_map_proto.pbtxt"
- uid_lookup_path="imagenet_synset_to_human_label_map.txt"
So each instance is very large, and when your code loops it creates hundreds of instances of this class, which eventually results in 'tensorflow.python.framework.errors.ResourceExhaustedError'.
My suggestion is to write a new script that adapts those classes and functions from 'classify_image.py', avoiding instantiating the NodeLookup class on every iteration; instantiate it once and reuse it inside the loop. Something like this:
with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    print 'Making classifications:'

    # Creates node ID --> English string lookup (built once, reused below).
    node_lookup = NodeLookup(label_lookup_path=self.Model_Save_Path + self.label_lookup,
                             uid_lookup_path=self.Model_Save_Path + self.uid_lookup_path)

    current_counter = 1
    for (tensor_image, image) in self.tensor_files:
        print 'On ' + str(current_counter)
        current_counter += 1  # advance the progress counter

        predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': tensor_image})
        predictions = np.squeeze(predictions)

        top_k = predictions.argsort()[-int(self.filter_level):][::-1]
        for node_id in top_k:
            human_string = node_lookup.id_to_string(node_id)
            score = predictions[node_id]
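For reference, here is a minimal self-contained sketch of the same pattern, assuming the stock helpers from the linked classify_image.py (create_graph, maybe_download_and_extract, NodeLookup) and a hypothetical image_paths list; the graph and the lookup table are built exactly once, before the loop:

import numpy as np
import tensorflow as tf

# These helpers are defined in the stock classify_image.py linked above.
from classify_image import NodeLookup, create_graph, maybe_download_and_extract

maybe_download_and_extract()      # downloads the Inception model to /tmp/imagenet
create_graph()                    # imports the frozen GraphDef exactly once

image_paths = ['a.jpg', 'b.jpg']  # hypothetical list of images to classify

with tf.Session() as sess:
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    node_lookup = NodeLookup()    # built once, outside the loop

    for path in image_paths:
        image_data = tf.gfile.FastGFile(path, 'rb').read()
        predictions = np.squeeze(
            sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}))
        top_k = predictions.argsort()[-5:][::-1]
        for node_id in top_k:
            print('%s (score = %.5f)' % (node_lookup.id_to_string(node_id),
                                         predictions[node_id]))

The key point is the same in both versions: the Session, the graph, and the NodeLookup table all live outside the loop, so only the per-image JPEG bytes are allocated on each iteration.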