GStreamer RTSP Client increase memory usage gradually in Jetson TX2

After running the program for a long time, I finally pinned it down: the memory used by GStreamer rises slowly but steadily.

If the input source is an RTSP IP camera, does GStreamer use buffering internally? If so, does memory usage rise to a certain level and then stay constant?

Or is it unnecessary to change any RTSP-related settings in order to get a fixed-size buffer?

#include <gst/gst.h>
#include <gst/gstinfo.h>
#include <gst/app/gstappsink.h>
#include <gst/allocators/gstdmabuf.h>
#include <gst/app/gstappsrc.h>

#include <glib.h>
#include <termios.h> 
#include <iostream>
#include <cstdlib>
#include <cstring>   // memcpy
#include <cstdint>   // uint8_t

#include <opencv2/opencv.hpp>
using namespace cv;


GstElement *pipeline;


int _getch() 
{ 
    int ch; 
    struct termios old; 
    struct termios current; 
    tcgetattr(0, &old); 

    current = old; 
    current.c_lflag &= ~ICANON; 
    current.c_lflag &= ~ECHO; 

    tcsetattr(0, TCSANOW, &current); 
    ch = getchar(); 
    tcsetattr(0, TCSANOW, &old); 

    return ch; 
}


GstFlowReturn new_sample0(GstAppSink *appsink, gpointer data)
{
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    if (sample == NULL)
        return GST_FLOW_OK;

    GstCaps *caps = gst_sample_get_caps(sample);
    GstBuffer *buffer = gst_sample_get_buffer(sample);

    int width = 0, height = 0;
    GstStructure *structure = gst_caps_get_structure (caps, 0);
    gst_structure_get_int(structure, "height", &height);
    gst_structure_get_int(structure, "width", &width);
    gsize size = gst_buffer_get_size(buffer);

    printf("%d %d %zu\n", width, height, size);

    GstMapInfo map;
    if (gst_buffer_map (buffer, &map, GST_MAP_READ))
    {
        // Wrap the mapped NV12 data directly instead of allocating and
        // copying a temporary buffer on every frame; cvtColor writes into
        // its own output Mat, so no extra copy is needed here.
        Mat img(Size(width, height * 3 / 2), CV_8UC1, (void *)map.data);

        Mat img2;
        cvtColor(img, img2, COLOR_YUV2BGR_NV12);

        imshow("window", img2);
        waitKey(1);

        gst_buffer_unmap(buffer, &map);
    }

    gst_sample_unref (sample);

    return GST_FLOW_OK;
}



static void on_pad_added (GstElement *element, GstPad *pad, gpointer data)
{
    // Link the dynamically created rtspsrc pad to the depayloader's sink pad.
    GstElement *depay = gst_bin_get_by_name (GST_BIN (pipeline), "depay");
    GstPad *sinkpad = gst_element_get_static_pad (depay, "sink");

    if (!gst_pad_is_linked (sinkpad))
        gst_pad_link (pad, sinkpad);

    gst_object_unref (sinkpad);
    gst_object_unref (GST_OBJECT (depay));
}


int main ()
{
    GstElement *source, *depay, *parse, *decoder,  *filter1, *conv, *filter2, *sink;

    gst_init (NULL, NULL);


    pipeline = gst_pipeline_new ("player");
    source   = gst_element_factory_make ("rtspsrc", "rtsp-source");
    depay    = gst_element_factory_make ("rtph264depay", "depay");
    parse    = gst_element_factory_make ("h264parse", "parser");
    decoder  = gst_element_factory_make ("nvv4l2decoder", "decoder"); 
    conv = gst_element_factory_make("nvvidconv", "conv"); 
    sink     = gst_element_factory_make ("appsink", "sink");
    filter1 = gst_element_factory_make ("capsfilter", "video_filter1");
    filter2 = gst_element_factory_make ("capsfilter", "video_filter2");

    if (!pipeline || !source || !depay || !parse || !decoder || !filter1 || !conv || !filter2 || !sink) {
        printf ("One element could not be created. Exiting.\n");
        return 1;
    }


    GstCaps    *caps1, *caps2;
    caps1 = gst_caps_from_string ("video/x-raw(memory:NVMM)"); 
    caps2 = gst_caps_from_string ("video/x-raw"); 
    g_object_set (G_OBJECT (filter1), "caps", caps1, NULL);
    g_object_set (G_OBJECT (filter2), "caps", caps2, NULL);
    gst_caps_unref (caps1);
    gst_caps_unref (caps2);


    g_object_set (G_OBJECT (source), "location", "rtsp ip address", NULL);

    g_object_set(G_OBJECT(sink), "emit-signals", true, NULL);
    g_object_set(G_OBJECT(sink), "async", false, "sync", false, "max-lateness", 0, NULL);
    g_object_set(G_OBJECT (source), "latency", 0, NULL);

    gst_bin_add_many (GST_BIN (pipeline), source, depay, parse, decoder, filter1, conv, filter2, sink, NULL);
    gst_element_link_many (depay, parse, decoder, filter1, conv, filter2, sink, NULL);


    g_signal_connect (source, "pad-added", G_CALLBACK (on_pad_added), NULL);
    if ( g_signal_connect(sink, "new-sample", G_CALLBACK(new_sample0), NULL) <= 0 )
    {
        std::cout << "Failed to connect a callback to the \"new-sample\" signal" << std::endl;
        return 1;
    }


    printf ("Now playing\n");
    
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    printf ("Running...\n");

    while(1)
    {
        int key = _getch();

        if (key=='q') break;
    }

    printf ("Returned, stopping playback\n");
    gst_element_set_state (pipeline, GST_STATE_NULL);

    printf ("Deleting pipeline\n");
    gst_object_unref (GST_OBJECT (pipeline));
    gst_deinit();

    return 0;
}

Does GStreamer use buffering internally, so that memory rises to a certain level and then stays constant?

Yes and no. Some GStreamer elements can buffer (possibly up to some threshold), but not necessarily. For example, rtpjitterbuffer buffers packets because part of its job is to deal with packet reordering. Some other elements, like the queue element, can be used explicitly to do buffering (for example, it allows you to decouple the streaming threads on its source and sink pads). Other elements don't buffer at all, because there is no explicit need to.
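As an illustration of a bounded queue element, a sketch of how it might be configured (the property values below are illustrative, not recommendations; `"decouple"` is a hypothetical element name):

```cpp
// A queue element bounded by buffer count only; setting the byte and
// time limits to 0 disables those two limits. Inserted between two
// elements, it decouples their streaming threads.
GstElement *q = gst_element_factory_make ("queue", "decouple");
g_object_set (G_OBJECT (q),
              "max-size-buffers", 30,
              "max-size-bytes", 0,
              "max-size-time", (guint64) 0,
              NULL);
```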

I can see you use an appsink element, which also implicitly buffers incoming data: this is necessary so that a pipeline wanting to push a new buffer is not stalled while the "new-sample" callback for the previous buffer is still busy. This behavior is configurable, though.
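That configuration boils down to two appsink properties; a hedged sketch, where `sink` is the appsink from the question's code and the value 4 is illustrative:

```cpp
// Keep at most 4 pending samples in the appsink and drop the oldest
// ones instead of letting the internal queue grow without bound.
g_object_set (G_OBJECT (sink),
              "max-buffers", 4,
              "drop", TRUE,
              NULL);
```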

Looking at your code, I see a waitKey(1) in the callback, which blocks the streaming thread for at least a millisecond, on top of the cvtColor and imshow work, before the next frame can be handled. So whenever frames arrive faster than the callback finishes (as can happen with, say, a 30 Hz stream), they will be queued up by the appsink. Note that the "max-lateness" property value is ignored, because you set the "sync" property to false.

Depending on how you want to handle this, you have several options:

  • Set the appsink's "max-buffers" property so it only buffers up to a certain threshold, and set its "drop" property to true so that excess frames are dropped.
  • Don't use blocking functions in the callback. For example, you could enqueue each newly arrived sample and process it in a different thread. Note, of course, that if you merely keep enqueueing in that thread, you will run into the same problem again.
  • Use something other than OpenCV, since here you only use it for rendering and color conversion, both of which can also be done within GStreamer (depending on your actual use case, of course).