AVAssetWriterInput appendSampleBuffer succeeds, but logs error kCMSampleBufferError_BufferHasNoSampleSizes from CMSampleBufferGetSampleSize
Since the iOS 12.4 betas, calling appendSampleBuffer on an AVAssetWriterInput logs the following error:
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.12/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153
We did not see this error on earlier versions, nor on the iOS 13 betas.
Has anyone else run into this, or can anyone share information that would help us resolve it?
More details
Our app records video and audio using two AVAssetWriterInput objects: one for the video input (appending pixel buffers), and one for the audio input, appending audio buffers created with CMSampleBufferCreate. (See the code below.)
Because our audio data is non-interleaved, after creating the buffer we convert the data to interleaved form and pass it to appendSampleBuffer.
Relevant code
// Creating the audio buffer:
CMSampleBufferRef buff = NULL;
CMSampleTimingInfo timing = {
    CMTimeMake(1, _asbdFormat.mSampleRate),
    currentAudioTime,
    kCMTimeInvalid };
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,
                                       false,
                                       NULL,
                                       NULL,
                                       _cmFormat,
                                       (CMItemCount)(*inNumberFrames),
                                       1,
                                       &timing,
                                       0,
                                       NULL,
                                       &buff);
// checking for error... (none returned)

// Converting from non-interleaved to interleaved.
float zero = 0.f;
vDSP_vclr(_interleavedABL.mBuffers[0].mData, 1, numFrames * 2);
// Channel L
vDSP_vsadd(ioData->mBuffers[0].mData, 1, &zero, _interleavedABL.mBuffers[0].mData, 2, numFrames);
// Channel R (read from the second non-interleaved buffer)
vDSP_vsadd(ioData->mBuffers[1].mData, 1, &zero, (float*)(_interleavedABL.mBuffers[0].mData) + 1, 2, numFrames);

_interleavedABL.mBuffers[0].mDataByteSize = _interleavedASBD.mBytesPerFrame * numFrames;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                        kCFAllocatorDefault,
                                                        kCFAllocatorDefault,
                                                        0,
                                                        &_interleavedABL);
// checking for error... (none returned)

if (_assetWriterAudioInput.readyForMoreMediaData) {
    BOOL success = [_assetWriterAudioInput appendSampleBuffer:audioBuffer]; // THIS PRODUCES THE ERROR.
    // success is returned true, but the above error is logged - on iOS 12.4 betas (not on 12.3 or earlier)
}
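For reference, the two vDSP_vsadd calls above are being used as strided copies (adding zero while writing with an output stride of 2). The same L/R interleave can be sketched in plain C; the function and variable names here are illustrative, not part of the original code:

```c
#include <stddef.h>

/* Interleave two mono (non-interleaved) float channels into one
 * interleaved buffer: out = {L0, R0, L1, R1, ...}.
 * This mirrors what the two strided vDSP_vsadd calls accomplish. */
static void interleave_stereo(const float *left, const float *right,
                              float *out, size_t numFrames)
{
    for (size_t i = 0; i < numFrames; i++) {
        out[2 * i]     = left[i];   /* channel L at even indices */
        out[2 * i + 1] = right[i];  /* channel R at odd indices */
    }
}
```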
First, here is how _assetWriterAudioInput is created:
-(BOOL) initializeAudioWriting
{
    BOOL success = YES;
    NSDictionary *audioCompressionSettings = // settings dictionary, see below.

    if ([_assetWriter canApplyOutputSettings:audioCompressionSettings forMediaType:AVMediaTypeAudio]) {
        _assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
        if ([_assetWriter canAddInput:_assetWriterAudioInput]) {
            [_assetWriter addInput:_assetWriterAudioInput];
        }
        else {
            // return error
        }
    }
    else {
        // return error
    }
    return success;
}
audioCompressionSettings is defined as:
+ (NSDictionary*)audioSettingsForRecording
{
    AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
    double preferredHardwareSampleRate;
    if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
    {
        preferredHardwareSampleRate = [sharedAudioSession sampleRate];
    }
    else
    {
        preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
    }

    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    return @{
        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey: @2,
        AVSampleRateKey: @(preferredHardwareSampleRate),
        AVChannelLayoutKey: [NSData dataWithBytes:&acl length:sizeof(acl)],
        AVEncoderBitRateKey: @160000
    };
}
appendSampleBuffer logs the following error and call stack (relevant part):
CMSampleBufferGetSampleSize signalled err=-12735 (kCMSampleBufferError_BufferHasNoSampleSizes) (sbuf->numSampleSizeEntries == 0) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/EmbeddedCoreMediaFramework/EmbeddedCoreMedia-2290.6/Sources/Core/FigSampleBuffer/FigSampleBuffer.c:4153
0 CoreMedia 0x00000001aff75194 CMSampleBufferGetSampleSize + 268 [0x1aff34000 + 266644]
1 My App 0x0000000103212dfc -[MyClassName writeAudioFrames:audioBuffers:] + 1788 [0x102aec000 + 7499260]
...
Any help would be much appreciated.
Edit: adding the following information:
We pass 0 and NULL for the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate - according to the documentation, that is what we must pass when creating a buffer of non-interleaved data (although that documentation is somewhat confusing to me).
We tried passing 1 and a pointer to a size_t, e.g.:
size_t sampleSize = 4;
but that didn't help; it logged a different error:
figSampleBufferCheckDataSize signalled err=-12731 (kFigSampleBufferError_RequiredParameterMissing) (bbuf vs. sbuf data size mismatch)
It is not clear to us what the value should be (how to know the size of each sample), or whether this is the right solution at all.
I think we have the answer:
Passing the numSampleSizeEntries and sampleSizeArray parameters of CMSampleBufferCreate as shown below seems to fix it (still needs full verification).
As I understand it, the reason is that we end up appending an interleaved buffer, which needs to have sample sizes (at least as of 12.4).
// _asbdFormat is the AudioStreamBasicDescription.
size_t sampleSize = _asbdFormat.mBytesPerFrame;
OSStatus status = CMSampleBufferCreate(kCFAllocatorDefault,
                                       NULL,
                                       false,
                                       NULL,
                                       NULL,
                                       _cmFormat,
                                       (CMItemCount)(*inNumberFrames),
                                       1,
                                       &timing,
                                       1,
                                       &sampleSize,
                                       &buff);
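As a sanity check on that value: for interleaved PCM, one CMSampleBuffer "sample" is one frame, so mBytesPerFrame is bytes-per-sample times channel count - for interleaved stereo 32-bit float that is 8 bytes, not the 4 we tried earlier. A minimal sketch of that arithmetic (hypothetical helper name, plain C):

```c
#include <stddef.h>

/* For interleaved PCM, the per-sample (per-frame) size is
 * bytesPerSample * channelCount - the value to pass in
 * sampleSizeArray. */
static size_t bytes_per_frame(size_t bytesPerSample, size_t channels)
{
    return bytesPerSample * channels;
}
```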
That error indicates that the data-length parameters passed to the CMBlockBufferCreate... and CMSampleBufferCreate... functions do not match.
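In other words, the block buffer's total data length must equal the total implied by the sample-size entries; with a single constant sample size that is simply numSamples * sampleSize. A sketch of the consistency check (hypothetical function name, not a CoreMedia API):

```c
#include <stdbool.h>
#include <stddef.h>

/* The -12731 "bbuf vs. sbuf data size mismatch" fires when the block
 * buffer's length disagrees with numSamples * sampleSize (for a
 * single constant sample-size entry). */
static bool data_sizes_match(size_t blockBufferLength,
                             size_t numSamples, size_t sampleSize)
{
    return blockBufferLength == numSamples * sampleSize;
}
```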