How to extract H264 frames using live555
There is no complete example at all. In the live555 folder there is the program testRTSPClient.cpp, which accesses an RTSP stream and receives the raw RTP packets, but does nothing with them. It receives them through a DummySink class.
There is an example of how to use testRTSPClient.cpp to receive NAL units from H264, but live555 has custom sink classes specifically for each codec, so it seems much better to use them. Example: H264or5VideoRTPSink.cpp.
So, if I replace the DummySink instance in testRTSPClient.cpp with an instance of a subclass of H264or5VideoRTPSink, and make that subclass receive the frames, that could be useful. If I just follow the DummySink implementation, I would only have to write something like this:
class MyH264VideoRTPSink: public H264VideoRTPSink {
public:
  static MyH264VideoRTPSink* createNew(UsageEnvironment& env,
      MediaSubsession& subsession, // identifies the kind of data that's being received
      char const* streamId = NULL); // identifies the stream itself (optional)

private:
  MyH264VideoRTPSink(UsageEnvironment& env, MediaSubsession& subsession,
      char const* streamId); // called only by "createNew()"
  virtual ~MyH264VideoRTPSink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);
  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime, unsigned durationInMicroseconds);

  // redefined virtual functions:
  virtual Boolean continuePlaying();

  u_int8_t* fReceiveBuffer;
  MediaSubsession& fSubsession;
  char* fStreamId;
};
If we look at DummySink, it suggests that afterGettingFrame is the function that receives the frames. But where is the frame received from? How can I access it?
void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {
  // We've just received a frame of data. (Optionally) print out information about it:
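The rest of the stock implementation (abridged from testRTSPClient.cpp) just prints that information and calls continuePlaying() again; the frame data itself is never passed as a parameter:

  envir() << fSubsession.mediumName() << "/" << fSubsession.codecName()
          << ":\tReceived " << frameSize << " bytes";
  if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
  envir() << "\n";

  // Then continue, after a delay if necessary:
  continuePlaying();
}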
Update:
I created my own H264 sink class: https://github.com/lucaszanella/jscam/blob/f6b38eea2934519bcccd76c8d3aee7f58793da00/src/jscam/android/app/src/main/cpp/MyH264VideoRTPSink.cpp but its createNew is different from the one in DummySink:
createNew(UsageEnvironment& env, Groupsock* RTPgs, unsigned char rtpPayloadFormat);
Nowhere is it mentioned what RTPgs means, nor rtpPayloadFormat. I don't even know if I'm on the right track...
The first confusion is between Source and Sink. The FAQ briefly describes the workflow:
'source1' -> 'source2' (a filter) -> 'source3' (a filter) -> 'sink'
The class H264VideoRTPSink is meant to publish data over RTP, not to consume it (which is why its createNew takes a Groupsock, the socket to send on, and an RTP payload type number).
In the case of the RTSP client sample testRTSPClient.cpp, the creation of the codec-dependent sources is handled by MediaSession::createNew, called while processing the DESCRIBE answer.
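A rough sketch of that client-side setup, following the structure of testRTSPClient.cpp (error handling omitted):

// Build the session from the SDP returned by DESCRIBE:
MediaSession* session = MediaSession::createNew(env, sdpDescription);

// Each subsession (audio, video, ...) gets its own codec-specific source:
MediaSubsessionIterator iter(*session);
MediaSubsession* subsession;
while ((subsession = iter.next()) != NULL) {
  if (subsession->initiate()) {
    // subsession->readSource() now delivers depacketized frames for this codec
    // (for H264, NAL units with the RTP packetization already removed).
  }
}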
The sink is not codec-dependent. The startPlaying method on MediaSink registers the callback afterGettingFrame, which is invoked when data is received from the source. Then, when this callback executes, you should call continuePlaying to register it again for the next incoming data.
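This is essentially what DummySink::continuePlaying in testRTSPClient.cpp does: it asks the source for the next frame and points it at the sink's own buffer:

Boolean DummySink::continuePlaying() {
  if (fSource == NULL) return False; // sanity check (should not happen)

  // Request the next frame from the (codec-specific) source; it will be
  // copied into fReceiveBuffer, then afterGettingFrame() is invoked:
  fSource->getNextFrame(fReceiveBuffer, DUMMY_SINK_RECEIVE_BUFFER_SIZE,
                        afterGettingFrame, this,
                        onSourceClosure, this);
  return True;
}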
In DummySink::afterGettingFrame, the buffer fReceiveBuffer therefore contains the H264 elementary stream frame extracted from the RTP buffer.
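So, to save a playable H264 elementary stream, it is enough to prepend the Annex-B start code to each frame delivered there. A minimal sketch (fOut is a hypothetical FILE* you would add to DummySink yourself; you should also write the SPS/PPS obtained once from fSubsession.fmtp_spropparametersets() at the start of the file):

void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned /*durationInMicroseconds*/) {
  // live555 has already stripped the RTP packetization, so fReceiveBuffer
  // holds one NAL unit without its Annex-B start code; prepend it ourselves:
  static unsigned char const startCode[4] = {0x00, 0x00, 0x00, 0x01};
  fwrite(startCode, 1, sizeof startCode, fOut); // fOut: FILE* added to DummySink (assumption)
  fwrite(fReceiveBuffer, 1, frameSize, fOut);

  continuePlaying(); // re-register the callback for the next frame
}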
In order to dump H264 elementary stream frames, you could also have a look at h264bitstream.