RTMP streaming via gstreamer-1.0 appsrc to rtmpsink
I am trying to stream my webcam over RTMP. I streamed the data through the following pipeline:
gst-launch-1.0 -v v4l2src ! 'video/x-raw, width=640, height=480,
framerate=30/1' ! queue ! videoconvert ! omxh264enc ! h264parse !
flvmux ! rtmpsink location='rtmp://{MY_IP}/rtmp/live'
It works like a charm; I can see the video on my website.
Then I wanted to grab the frames first and do some processing on them.
I pushed the processed data into appsrc and streamed it through a pipeline just as before, but something went wrong:
I cannot see any stream on my website, and neither the server side nor the client side raises any error or warning. Nevertheless, I can still fetch the stream with:
gst-launch-1.0 rtmpsrc location='rtmp://{MY_IP}/rtmp/live' ! filesink
location='rtmpsrca.flv'
Does anyone have an idea what is going wrong?
Here are the relevant snippets of my web page and my GStreamer pipeline.
GStreamer pipeline:
void threadgst(){
App * app = &s_app;
GstCaps *srccap;
GstCaps * filtercap;
GstFlowReturn ret;
GstBus *bus;
GstElement *pipeline;
gst_init (NULL,NULL);
loop = g_main_loop_new (NULL, TRUE);
// create the pipeline:
pipeline = gst_pipeline_new ("gstreamer-encoder");
if( ! pipeline ) {
g_print("Error creating Pipeline, exiting...");
}
// create the appsrc element:
app-> videosrc = gst_element_factory_make ("appsrc", "videosrc");
if( ! app->videosrc ) {
g_print( "Error creating source element, exiting...");
}
// create the queue element:
app-> queue = gst_element_factory_make ("queue", "queue");
if( ! app->queue ) {
g_print( "Error creating queue element, exiting...");
}
app->videocoverter = gst_element_factory_make ("videoconvert", "videocoverter");
if( ! app->videocoverter ) {
g_print( "Error creating videocoverter, exiting...");
}
// create the capsfilter element:
app->filter = gst_element_factory_make ("capsfilter", "filter");
if( ! app->filter ) {
g_print( "Error creating filter, exiting...");
}
app->h264enc = gst_element_factory_make ("omxh264enc", "h264enc");
if( ! app->h264enc ) {
g_print( "Error creating omxh264enc, exiting...");
}
app->h264parse = gst_element_factory_make ("h264parse", "h264parse");
if( ! app->h264parse ) {
g_print( "Error creating h264parse, exiting...");
}
app->flvmux = gst_element_factory_make ("flvmux", "flvmux");
if( ! app->flvmux ) {
g_print( "Error creating flvmux, exiting...");
}
app->rtmpsink = gst_element_factory_make ("rtmpsink", "rtmpsink");
if( ! app->rtmpsink ) {
g_print( "Error rtmpsink flvmux, exiting...");
}
g_print ("Elements are created\n");
g_object_set (G_OBJECT (app->rtmpsink), "location" , "rtmp://192.168.3.107/rtmp/live live=1" , NULL);
g_print ("end of settings\n");
srccap = gst_caps_new_simple("video/x-raw",
"format", G_TYPE_STRING, "RGB",
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
//"width", G_TYPE_INT, 320,
//"height", G_TYPE_INT, 240,
"framerate", GST_TYPE_FRACTION, 30, 1,
//"pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1,
NULL);
filtercap = gst_caps_new_simple("video/x-raw",
"format", G_TYPE_STRING, "I420",
"width", G_TYPE_INT, 640,
"height", G_TYPE_INT, 480,
//"width", G_TYPE_INT, 320,
//"height", G_TYPE_INT, 240,
"framerate", GST_TYPE_FRACTION, 30, 1,
NULL);
gst_app_src_set_caps(GST_APP_SRC( app->videosrc), srccap);
g_object_set (G_OBJECT (app->filter), "caps", filtercap, NULL);
bus = gst_pipeline_get_bus (GST_PIPELINE ( pipeline));
g_assert(bus);
gst_bus_add_watch ( bus, (GstBusFunc) bus_call, app);
gst_bin_add_many (GST_BIN ( pipeline), app-> videosrc, app->queue, app->videocoverter,app->filter, app->h264enc, app->h264parse, app->flvmux, app->rtmpsink, NULL);
g_print ("Added all the Elements into the pipeline\n");
gboolean ok = FALSE;
ok = gst_element_link_many ( app-> videosrc, app->queue, app->videocoverter, app->filter,app->h264enc, app->h264parse, app->flvmux, app->rtmpsink, NULL);
if(ok)g_print ("Linked all the Elements together\n");
else g_print("*** Linking error ***\n");
g_assert(app->videosrc);
g_assert(GST_IS_APP_SRC(app->videosrc));
g_signal_connect (app->videosrc, "need-data", G_CALLBACK (start_feed), app);
g_signal_connect (app->videosrc, "enough-data", G_CALLBACK (stop_feed),app);
g_print ("Playing the video\n");
gst_element_set_state (pipeline, GST_STATE_PLAYING);
g_print ("Running...\n");
g_main_loop_run ( loop);
g_print ("Returned, stopping playback\n");
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref ( bus);
g_main_loop_unref (loop);
g_print ("Deleting pipeline\n");
}
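The start_feed / stop_feed callbacks connected above are not included in the question. As a rough sketch only (the frame size, the g_malloc'd buffer, and the error print are assumptions, not the asker's code), a need-data handler for this appsrc typically looks something like the following, assuming <gst/app/gstappsrc.h> is included:

/* Hypothetical sketch of the feed callbacks referenced above. */
static void start_feed (GstElement *source, guint unused_size, App *app)
{
    const gsize size = 640 * 480 * 3;                         /* one 640x480 RGB frame */
    guint8 *data = g_malloc (size);
    /* ... copy the processed frame into data here ... */

    GstBuffer *buffer = gst_buffer_new_wrapped (data, size);  /* buffer takes ownership of data */
    /* Note: no PTS/duration is set on the buffer here. */

    GstFlowReturn ret = gst_app_src_push_buffer (GST_APP_SRC (app->videosrc), buffer);
    if (ret != GST_FLOW_OK)
        g_print ("push-buffer returned %d\n", ret);
}

static void stop_feed (GstElement *source, App *app)
{
    /* stop pushing frames until the next need-data signal */
}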
My web page source:
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<meta http-equiv="encoding" content="utf-8">
<title>Live Streaming</title>
<!-- strobe -->
<script type="text/javascript" src="strobe/lib/swfobject.js"></script>
<script type="text/javascript">
var parameters = {
src: "rtmp://192.168.3.107/rtmp/live",
autoPlay: true,
controlBarAutoHide: false,
playButtonOverlay: true,
showVideoInfoOverlayOnStartUp: true,
optimizeBuffering : false,
initialBufferTime : 0.1,
expandedBufferTime : 0.1,
minContinuousPlayback : 0.1,
//poster: "images/poster.png"
};
swfobject.embedSWF(
"strobe/StrobeMediaPlayback.swf"
, "StrobeMediaPlayback"
, 1024
, 768
, "10.1.0"
, "strobe/expressInstall.swf"
, parameters
, {
allowFullScreen: "true"
}
, {
name: "StrobeMediaPlayback"
}
);
</script>
</head>
<body>
<div id="StrobeMediaPlayback"></div>
</body>
</html>
When using appsrc and appsink, people usually do something with the buffers: sometimes they take the data out, process it somehow, then create a new buffer from the result but forget to timestamp it properly.
What is a timestamp?
It attaches timing information to an audio/video buffer.
Why? Because the synchronization mechanism of every application (VLC, a web player, ...) presents video/audio to the user at a specific time and at a specific rate, and that information is the PTS (presentation timestamp).
This is related to the framerate (for video) or the frequency (for audio, although timestamps work differently there: they are not attached to each individual audio sample, which is usually 4 bytes). At 30 fps, for example, frame n would normally carry a PTS of n * 1/30 s.
So what probably happened on your web side is that it received the buffers but without this timestamp information. The application then does not know how/when to display the video, so it fails silently and shows nothing.
The plain gst-launch pipeline works because it apparently has some algorithm that can guess the framerate and so on.
As I said, you have two options; short example sketches of both follow below.
1. Compute the PTS and duration yourself:
guint64 calculated_pts = some_cool_algorithm();
GstBuffer *buffer = gst_buffer_new_wrapped (data, size); // wrap your processed data
GST_BUFFER_PTS (buffer) = calculated_pts;   // in nanoseconds
GST_BUFFER_DURATION (buffer) = 1234567890;  // in nanoseconds
// push the buffer to appsrc, e.g. with gst_app_src_push_buffer()
2. Or turn on do-timestamp on appsrc, which will generate the timestamps automatically. I am not sure exactly how it does this: it either picks the framerate from the caps or generates the PTS according to how you push the frames into it.
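For a constant 30 fps feed like the one in the question, the "cool algorithm" from option 1 can simply be a running frame counter. A minimal sketch (push_processed_frame, frame_count, data, and size are illustrative names, not part of the answer):

/* Option 1 sketch: timestamp and push one processed frame at a fixed 30 fps. */
static guint64 frame_count = 0;

static void push_processed_frame (App *app, guint8 *data, gsize size)
{
    GstBuffer *buffer = gst_buffer_new_wrapped (data, size);

    GST_BUFFER_PTS (buffer)      = gst_util_uint64_scale (frame_count, GST_SECOND, 30);
    GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale (1, GST_SECOND, 30);
    frame_count++;

    gst_app_src_push_buffer (GST_APP_SRC (app->videosrc), buffer); /* takes ownership */
}

If you go with option 2 instead, the property can be set right after the appsrc is created. Another minimal sketch (setting is-live and format next to do-timestamp is an assumption that fits the live webcam use case, not something the answer spells out):

/* Option 2 sketch: let appsrc timestamp buffers as they are pushed. */
g_object_set (G_OBJECT (app->videosrc),
              "do-timestamp", TRUE,             /* stamp each buffer on arrival */
              "is-live",      TRUE,             /* behave like a live source */
              "format",       GST_FORMAT_TIME,  /* operate in time, not bytes */
              NULL);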