Gstreamer stream is not working with OpenCV
I want to use a GStreamer pipeline directly with OpenCV to manage the acquisition of images from a camera. I don't have the camera yet, so I have been trying to capture video from a URI and from local files. I am working on a Jetson AGX Xavier with L4T (Ubuntu 18.04); my OpenCV build includes GStreamer support, and both libraries seem to work fine independently.
The problem I have is that when I pass a string defining a pipeline to the VideoCapture class with cv2.CAP_GSTREAMER, I get warnings like the following:
[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (854) open OpenCV | GStreamer warning: Error opening bin: could not link playbin0 to whatever sink I've defined
[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (597) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
I have tried several options, which you can see in the code below:
#include <opencv2/opencv.hpp>
#include <gst/gst.h>
#include <iostream>

using namespace cv;
using namespace std;

bool receiver(const char* context)
{
    VideoCapture cap(context, CAP_GSTREAMER);
    int fail = 0;
    while (!cap.isOpened())
    {
        cout << "VideoCapture not opened" << endl;
        fail++;
        if (fail > 10) {
            return false;
        }
    }
    Mat frame;
    while (true) {
        cap.read(frame);
        if (frame.empty())
            return true;
        imshow("Receiver", frame);
        if (waitKey(1) == 'r')
            return false;
    }
    destroyWindow("Receiver");
    return true;
}

int main(int argc, char *argv[])
{
    GstElement *pipeline;
    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false"; // Command for the camera that I don't have yet
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";
    const char* thermal_context = "playbin uri=file:///home/nvidia/repos/vidtest/thermalVideo.avi ! appsink name=thermalsink";
    const char* local_context = "playbin uri=file:///home/nvidia/repos/flir/Video.avi";
    // gst_init(&argc, &argv);
    // pipeline = gst_parse_launch(test_context, NULL);
    bool correct_execution = receiver(thermal_context);
    if (correct_execution) {
        cout << "openCV - gstreamer works!" << endl;
    } else {
        cout << "openCV - gstreamer FAILED" << endl;
    }
}
For the commands I have tested, the warning isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created is persistent, and if I don't define an appsink, the first warning shown above changes to open OpenCV | GStreamer warning: cannot find appsink in manual pipeline.
From the warnings I understand that the pipeline is incomplete or not created correctly, but I don't know why; I have followed the examples I found online and they don't include any additional steps.
Also, when visualizing the stream directly with a GStreamer pipeline, everything seems to work when I open a local video, except that playback freezes on the first frame and never advances; it just stays stuck there. Do you know why this happens? With the playbin uri pointing to an internet address everything works fine... The code is the following:
#include <gst/gst.h>
#include <iostream>

using namespace std;

int main(int argc, char *argv[])
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;
    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false";
    const char* local_context = "gst-launch-1.0 -v playbin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi";
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";

    // Initialize gstreamer
    gst_init(&argc, &argv);
    // Create C pipeline from terminal command (context)
    pipeline = gst_parse_launch(local_context, NULL);
    // Start the pipeline
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // Wait until error or EOS
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE, (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    /* Free resources */
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
}
For using the GStreamer backend, OpenCV's VideoCapture expects a valid pipeline string from your source to an appsink (in BGR color format).
Your pipeline strings are not correct, mainly because they start with the binary command you would use to run them in a shell (gstlaunch for gst-launch-1.0, playbin).
You may try the following pipeline for reading an H264-encoded video from RTP/UDP, decoding it with the dedicated HW decoder (NVDEC), copying from NVMM memory into system memory while converting into BGRx format, and then using CPU-based videoconvert to get the BGR format that the OpenCV appsink expects:
const char* context = "udpsrc port=5000 caps=application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
Or, with uridecodebin: if an NV decoder is selected, its output may be in NVMM memory, otherwise in system memory, so the first nvvidconv instance below copies into NVMM memory, and the second nvvidconv then converts into BGRx with HW and outputs into system memory:
const char* local_context = "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
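For completeness, here is a minimal sketch of how such a string plugs into VideoCapture; it is essentially your receiver() function with the corrected pipeline (the file path is the one from your question):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Corrected uridecodebin pipeline from above; swap in the udpsrc
    // variant once the camera is available.
    const char* local_context = "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
    cv::VideoCapture cap(local_context, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open GStreamer pipeline" << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (cap.read(frame) && !frame.empty()) {
        cv::imshow("Receiver", frame);
        if (cv::waitKey(1) == 'r')
            break;
    }
    return 0;
}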
Note for high resolutions:
- CPU-based videoconvert may be a bottleneck. Enable all cores and boost the clocks (e.g. with nvpmodel and jetson_clocks).
- OpenCV imshow may not be that fast, depending on the graphical backend of your OpenCV build (GTK, QT4, QT5...). In that case, one solution is to output to a GStreamer video sink with an OpenCV VideoWriter using the GStreamer backend, as in the sketch below.
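A minimal sketch of that VideoWriter approach (the choice of xvimagesink is an assumption; any video sink available on your platform, such as nveglglessink on Jetson, should work):

#include <opencv2/opencv.hpp>

// Reads BGR frames from an already-opened capture and pushes them to a
// GStreamer display sink instead of using imshow.
void display_with_gstreamer(cv::VideoCapture& cap, double fps, cv::Size size)
{
    // fourcc is 0 because the "filename" is a GStreamer pipeline, not a file;
    // xvimagesink is an assumption, replace with your preferred video sink.
    cv::VideoWriter writer("appsrc ! videoconvert ! xvimagesink sync=false",
                           cv::CAP_GSTREAMER, 0 /*fourcc*/, fps, size, true);
    cv::Mat frame;
    while (cap.read(frame) && !frame.empty())
        writer.write(frame); // frame dimensions must match 'size'
}

fps and size can be queried from the capture with cap.get(cv::CAP_PROP_FPS), cap.get(cv::CAP_PROP_FRAME_WIDTH) and cap.get(cv::CAP_PROP_FRAME_HEIGHT).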