How to use a custom TF.lite model with 2 classes on a Raspberry Pi with a Coral?
Two days ago I trained a custom model in TFLite on an image dataset. The accuracy is 97.4% and there are only 2 classes (person, flower).
I converted the model so I can use it on a Raspberry Pi with a Google Coral TPU.
Right now I'm running into some problems: the Google Coral documentation isn't working for me.
Language: Python 3
Libraries:
- Keras
- TensorFlow
- Pillow
- PiCamera
- NumPy
- EdgeTPU engine
Project tree:
--------> model (subfolder)
------------> model.tflite
------------> labels.txt
--------> video_detection.py
Here is the Python code (it actually comes from the documentation):
import argparse
import io
import time

import numpy as np
import picamera

import edgetpu.classification.engine


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model', help='File path of Tflite model.', required=True)
    parser.add_argument(
        '--label', help='File path of label file.', required=True)
    args = parser.parse_args()

    # Build a {class_id: label_name} dict from the label file.
    with open(args.label, 'r', encoding="utf-8") as f:
        pairs = (l.strip().split(maxsplit=2) for l in f.readlines())
        labels = dict((int(k), v) for k, v in pairs)

    engine = edgetpu.classification.engine.ClassificationEngine(args.model)

    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.framerate = 30
        _, width, height, channels = engine.get_input_tensor_shape()
        camera.start_preview()
        try:
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream,
                                                 format='rgb',
                                                 use_video_port=True,
                                                 resize=(width, height)):
                stream.truncate()
                stream.seek(0)
                input = np.frombuffer(stream.getvalue(), dtype=np.uint8)
                start_ms = time.time()
                results = engine.ClassifyWithInputTensor(input, top_k=1)
                elapsed_ms = time.time() - start_ms
                if results:
                    camera.annotate_text = "%s %.2f\n%.2fms" % (
                        labels[results[0][0]], results[0][1], elapsed_ms * 1000.0)
        finally:
            camera.stop_preview()


if __name__ == '__main__':
    main()
How I run the script:
python3 video_detection.py --model model/model.tflite --label model/labels.txt
Error:
`Traceback (most recent call last):
File "video_detection.py", line 41, in <module>
main()
File "video_detection.py", line 16, in main
labels = dict((int(k), v) for k, v in pairs)
File "video_detection.py", line 16, in <genexpr>
labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)`
For me it's hard right now to integrate a custom model and use it with the Coral.
Documentation:
Thanks for reading, best regards
E.
The error is in your labels.txt file:
labels = dict((int(k), v) for k, v in pairs)
ValueError: not enough values to unpack (expected 2, got 1)
It looks like some lines contain only one value instead of two.
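A minimal sketch of a more tolerant loader (the file name labels.txt and the two class names are from the question; the helper name `load_labels` and the skip-malformed-lines behavior are assumptions about how you might want to handle this):

```python
def load_labels(path):
    """Build {class_id: label_name}, skipping lines that don't have
    both an integer index and a label (blank lines, stray tokens)."""
    labels = {}
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            parts = line.strip().split(maxsplit=1)
            if len(parts) != 2:   # blank line or single-token line:
                continue          # skip instead of raising ValueError
            class_id, name = parts
            labels[int(class_id)] = name
    return labels

# A well-formed labels.txt for the 2-class model would look like:
# 0 person
# 1 flower
```

Alternatively, open labels.txt and remove any blank or single-token lines so the original parsing code works unchanged.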