RTSP stream pushed with an ffmpeg command doesn't contain SPS and PPS frames
I use Python and opencv-python to capture frames from a video, and then push an RTSP stream through a pipe with an ffmpeg command. I can play the RTSP stream with GStreamer and VLC. However, the display device cannot decode and play the stream, because it never receives the SPS and PPS frames. Capturing the stream with Wireshark shows that no SPS or PPS NAL units are sent, only IDR frames.
The key code is as follows:
import subprocess as sp
import cv2

# rtsp_url, video_path, cap, width, height and fps are initialized earlier (omitted here)

# ffmpeg command
command = ['ffmpeg',
           '-re',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-preset', 'ultrafast',
           '-f', 'rtsp',
           '-flags', 'local_headers',
           '-rtsp_transport', 'tcp',
           '-muxdelay', '0.1',
           rtsp_url]

p = sp.Popen(command, stdin=sp.PIPE)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        # Loop the input video when it ends
        cap = cv2.VideoCapture(video_path)
        continue
    p.stdin.write(frame.tobytes())
Maybe I am missing some option in the ffmpeg command?
Try adding the arguments '-bsf:v', 'dump_extra'.

According to the FFmpeg Bitstream Filters documentation:
dump_extra
Add extradata to the beginning of the filtered packets except when said packets already exactly begin with the extradata that is intended to be added.
The filter should add the SPS and PPS NAL units to every keyframe.
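If you want to confirm that the filter exists in your FFmpeg build and see which options it accepts, you can ask FFmpeg for the filter's built-in help (a small sketch; the exact output depends on your FFmpeg version):

import subprocess as sp

# Print the built-in help for the dump_extra bitstream filter.
# The output lists the options this FFmpeg build accepts for the filter.
sp.run(['ffmpeg', '-hide_banner', '-h', 'bsf=dump_extra'])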
Here is a complete code sample:
import subprocess as sp
import cv2
rtsp_url = 'rtsp://localhost:31415/live.stream'
video_path = 'input.mp4'
# We have to start the server up first, before the sending client (when using TCP). See: https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming
ffplay_process = sp.Popen(['ffplay', '-rtsp_flags', 'listen', rtsp_url]) # Use FFplay sub-process for receiving the RTSP video.
cap = cv2.VideoCapture(video_path)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) # Get video frames width
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Get video frames height
fps = int(cap.get(cv2.CAP_PROP_FPS)) # Get video framerate
# FFmpeg command
command = ['ffmpeg',
           '-re',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-preset', 'ultrafast',
           '-f', 'rtsp',
           #'-flags', 'local_headers',
           '-rtsp_transport', 'tcp',
           '-muxdelay', '0.1',
           '-bsf:v', 'dump_extra',
           rtsp_url]
p = sp.Popen(command, stdin=sp.PIPE)
while cap.isOpened():
    ret, frame = cap.read()

    if not ret:
        break

    p.stdin.write(frame.tobytes())
p.stdin.close() # Close stdin pipe
p.wait() # Wait for FFmpeg sub-process to finish
ffplay_process.kill() # Forcefully close FFplay sub-process
Notes:

- '-flags', 'local_headers' is not a valid argument in my version of FFmpeg.
- I don't know how to verify my solution, so I may be wrong... (one possible way to check is sketched below).
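A possible way to verify (an untested sketch, assuming the stream is published to an RTSP server that a second client is allowed to connect to, unlike the point-to-point FFplay setup above): pull a few seconds of the stream back as a raw Annex-B H.264 elementary stream and count the NAL unit types. If dump_extra is working, SPS (type 7) and PPS (type 8) should show up repeatedly rather than only once at the start.

import subprocess as sp
from collections import Counter

rtsp_url = 'rtsp://localhost:31415/live.stream'  # assumed to match the sender's URL

# Copy the H.264 stream untouched (-c:v copy) for 5 seconds into a pipe.
probe = sp.run(['ffmpeg', '-hide_banner', '-loglevel', 'error',
                '-rtsp_transport', 'tcp', '-i', rtsp_url,
                '-t', '5', '-c:v', 'copy', '-f', 'h264', 'pipe:1'],
               stdout=sp.PIPE, check=True)

data = probe.stdout
counts = Counter()
i = 0
while True:
    # Find the next Annex-B start code (00 00 01); a 4-byte 00 00 00 01 code also matches here.
    i = data.find(b'\x00\x00\x01', i)
    if i < 0 or i + 3 >= len(data):
        break
    nal_type = data[i + 3] & 0x1F  # low 5 bits of the first NAL header byte
    counts[nal_type] += 1
    i += 3

print(counts)  # expect to see 7 (SPS), 8 (PPS), 5 (IDR) and 1 (non-IDR slices)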