Dynamic batch is not supported on Intel NCS2 VPU

I am trying to run the FP16 person-detection-retail-0013 and person-reidentification-retail-0079 models on Intel Neural Compute Stick 2 hardware, but as soon as the application loads the networks onto the device I get this exception:

[INFERENCE ENGINE EXCEPTION] Dynamic batch is not supported

I already set the maximum batch size to 1 when loading the networks, and I based my project on the pedestrian tracker demo from the OpenVINO toolkit:

main.cpp --> creating the pedestrian tracker

    CnnConfig reid_config(reid_model, reid_weights);
    reid_config.max_batch_size = 16;

    try {
        if (ie.GetConfig(deviceName, CONFIG_KEY(DYN_BATCH_ENABLED)).as<std::string>() !=
            PluginConfigParams::YES) {
            reid_config.max_batch_size = 1;
            std::cerr << "[DEBUG] Dynamic batch is not supported for " << deviceName
                      << ". Fall back to batch 1." << std::endl;
        }
    }
    catch (const InferenceEngine::details::InferenceEngineException& e) {
        reid_config.max_batch_size = 1;
        std::cerr << e.what() << " for " << deviceName << ". Fall back to batch 1." << std::endl;
    }

Cnn.cpp --> void CnnBase::InferBatch

    void CnnBase::InferBatch(
        const std::vector<cv::Mat>& frames,
        std::function<void(const InferenceEngine::BlobMap&, size_t)> fetch_results) const {
        const size_t batch_size = input_blob_->getTensorDesc().getDims()[0];

        size_t num_imgs = frames.size();
        for (size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
            const size_t current_batch_size = std::min(batch_size, num_imgs - batch_i);

            // Copy each frame of the current chunk into the input blob.
            for (size_t b = 0; b < current_batch_size; b++) {
                matU8ToBlob<uint8_t>(frames[batch_i + b], input_blob_, b);
            }

            // SetBatch (dynamic batching) is skipped on MYRIAD / HDDL,
            // since those plugins do not support it.
            if ((deviceName_.find("MYRIAD") == std::string::npos) &&
                (deviceName_.find("HDDL") == std::string::npos)) {
                infer_request_.SetBatch(current_batch_size);
            }

            infer_request_.Infer();

            fetch_results(outputs_, current_batch_size);
        }
    }

I suspect it is a topology issue with the detection network. Has anyone run into the same problem and solved it?
Thanks.

I'm afraid the MYRIAD plugin does not support dynamic batching. Please try a newer version of the demo; you can find it here: https://github.com/opencv/open_model_zoo/tree/master/demos/pedestrian_tracker_demo The updated demo does not use dynamic batching at all.
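One way to stay in that spirit is to never request more than batch 1 on devices whose plugins cannot change the batch at run time, so `SetBatch` is never needed. A minimal sketch (the `EffectiveBatchSize` helper is hypothetical, not part of the demo; it assumes MYRIAD and HDDL are the only affected plugins in this setup):

```cpp
#include <string>

// Clamp the requested batch size to 1 on plugins that do not
// support dynamic batching (MYRIAD / HDDL here). Hypothetical
// helper for illustration, not part of the pedestrian tracker demo.
int EffectiveBatchSize(const std::string& device_name, int requested) {
    const bool no_dyn_batch =
        device_name.find("MYRIAD") != std::string::npos ||
        device_name.find("HDDL") != std::string::npos;
    return no_dyn_batch ? 1 : requested;
}
```

With `reid_config.max_batch_size` set this way up front, the device-name check inside `InferBatch` becomes unnecessary, because every chunk already matches the blob's batch dimension of 1.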