Object Detection (coco-ssd) Node.js: Error: pixels passed to tf.browser.fromPixels() must be either an HTMLVideoElement

I'm using tensorflow-models/coco-ssd with Node.js on my ioBroker. How do I have to load the image?

When I do this, I get the error: Error: pixels passed to tf.browser.fromPixels() must be either an HTMLVideoElement, HTMLImageElement, HTMLCanvasElement, ImageData in browser, or OffscreenCanvas,

Here is my code:

const cocoSsd = require('@tensorflow-models/coco-ssd');
const fs = require('fs');
init();
function init() {
    (async () => {

        // Load the model.
        const model = await cocoSsd.load();
        
        // Read the image file.
        var image = fs.readFileSync('/home/iobroker/12-14-2020-tout.jpg');
    
        // Classify the image.
        const predictions = await model.detect(image);
    
        console.log('Predictions: ');
        console.log(predictions);

    })();
}

The error message you're seeing in this case is accurate.

First, in this part you initialize image with a Buffer holding the raw file contents:

        // Read the image file.
        var image = fs.readFileSync('/home/iobroker/12-14-2020-tout.jpg');

Then you pass it to model.detect():

        // Classify the image.
        const predictions = await model.detect(image);

The problem is that model.detect() actually expects an HTML image/video/canvas element. From the @tensorflow-models/coco-ssd object detection documentation:

It can take input as any browser-based image elements (<img>, <video>, <canvas> elements, for example) and returns an array of bounding boxes with class name and confidence level.
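
For comparison, in the browser detect() is normally fed a DOM element directly. A minimal sketch of that usage (the element id is purely illustrative):

// Browser-side sketch: pass an <img> element straight to detect().
const img = document.getElementById('myImage'); // hypothetical element id
cocoSsd.load()
  .then((model) => model.detect(img))
  .then((predictions) => console.log(predictions));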

It does not work in a Node server environment out of the box, as the same documentation notes:

Note: The following shows how to use coco-ssd npm to transpile for web deployment, not an example on how to use coco-ssd in the node env.

However, you can follow steps like the ones of this guide, which shows how to get it running on a Node server.

Here is an example:

const cocoSsd = require('@tensorflow-models/coco-ssd');
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs').promises;

// Load the Coco SSD model and image.
Promise.all([cocoSsd.load(), fs.readFile('/home/iobroker/12-14-2020-tout.jpg')])
.then((results) => {
  // First result is the COCO-SSD model object.
  const model = results[0];
  // Second result is the image buffer.
  const imgTensor = tf.node.decodeImage(new Uint8Array(results[1]), 3);
  // Call detect() to run inference.
  return model.detect(imgTensor);
})
.then((predictions) => {
  console.log(JSON.stringify(predictions, null, 2));
});
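
If you prefer async/await, or the process keeps running (as it typically does inside ioBroker), you can wrap the same calls and release the decoded tensor afterwards. A rough sketch using the same path and packages as above; dispose() is tfjs's standard way to free a tensor's memory:

const cocoSsd = require('@tensorflow-models/coco-ssd');
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs').promises;

(async () => {
  try {
    // Load the model and read the image file in parallel.
    const [model, imgBuffer] = await Promise.all([
      cocoSsd.load(),
      fs.readFile('/home/iobroker/12-14-2020-tout.jpg'),
    ]);
    // Decode the JPEG buffer into a 3-channel tensor.
    const imgTensor = tf.node.decodeImage(new Uint8Array(imgBuffer), 3);
    const predictions = await model.detect(imgTensor);
    console.log(JSON.stringify(predictions, null, 2));
    // Free the tensor's memory once detection is done.
    imgTensor.dispose();
  } catch (err) {
    console.error('Detection failed:', err);
  }
})();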