Create CMSampleBufferRef from an AudioInputIOProc

I have an AudioInputIOProc that I'm getting an AudioBufferList from. I need to convert this AudioBufferList to a CMSampleBufferRef.

Here's the code I've written so far:

- (void)handleAudioSamples:(const AudioBufferList*)samples numSamples:(UInt32)numSamples hostTime:(UInt64)hostTime {
// Create a CMSampleBufferRef from the list of samples, which we'll own

  AudioStreamBasicDescription monoStreamFormat;
  memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
  monoStreamFormat.mSampleRate = 44100;
  monoStreamFormat.mFormatID = kAudioFormatMPEG4AAC;
  monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
  monoStreamFormat.mBytesPerPacket = 4;
  monoStreamFormat.mFramesPerPacket = 1;
  monoStreamFormat.mBytesPerFrame = 4;
  monoStreamFormat.mChannelsPerFrame = 2;
  monoStreamFormat.mBitsPerChannel = 16;

  CMFormatDescriptionRef format = NULL;
  OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
  if (status != noErr) {
    // really shouldn't happen
    return;
  }

  mach_timebase_info_data_t tinfo;
  mach_timebase_info(&tinfo);

  UInt64 _hostTimeToNSFactor = (double)tinfo.numer / tinfo.denom;

  uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
  CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
  CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

  CMSampleBufferRef sampleBuffer = NULL;
  status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
  if (status != noErr) {
    // couldn't create the sample buffer
    NSLog(@"Failed to create sample buffer");
    CFRelease(format);
    return;
  }

  // add the samples to the buffer
  status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                        kCFAllocatorDefault,
                                                        kCFAllocatorDefault,
                                                        0,
                                                        samples);
  if (status != noErr) {
    NSLog(@"Failed to add samples to sample buffer");
    CFRelease(sampleBuffer);
    CFRelease(format);
    NSLog(@"Error status code: %d", status);
    return;
  }

  [self addAudioFrame:sampleBuffer];

  NSLog(@"Original sample buf size: %ld for %d samples from %d buffers, first buffer has size %d", CMSampleBufferGetTotalSampleSize(sampleBuffer), numSamples, samples->mNumberBuffers, samples->mBuffers[0].mDataByteSize);
  NSLog(@"Original sample buf has %ld samples", CMSampleBufferGetNumSamples(sampleBuffer));
}

Now, I'm not sure how to compute numSamples given the function definition of the AudioInputIOProc:

OSStatus AudioTee::InputIOProc(AudioDeviceID inDevice, const AudioTimeStamp *inNow, const AudioBufferList *inInputData, const AudioTimeStamp *inInputTime, AudioBufferList *outOutputData, const AudioTimeStamp *inOutputTime, void *inClientData)

This definition lives in the AudioTee.cpp file of WavTap.

The error I'm running into is a CMSampleBufferError_RequiredParameterMissing error, with error code -12731, when I try to call CMSampleBufferSetDataBufferFromAudioBufferList.

Update:

To clarify the problem, here is the format of the audio data I'm getting from the AudioDeviceIOProc:

Channels: 2, Sample Rate: 44100, Precision: 32-bit, Sample Encoding: 32-bit Signed Integer PCM, Endian Type: little, Reverse Nibbles: no, Reverse Bits: no

I get an AudioBufferList* containing all the audio data I need (for a 30-second video), which I have to convert into CMSampleBufferRefs and append to the video (30 seconds long) that is being written to disk via an AVAssetWriterInput.

Three things look wrong:

  1. You declare the format ID as kAudioFormatMPEG4AAC, but you configure it as LPCM. So try

    monoStreamFormat.mFormatID = kAudioFormatLinearPCM;

    You also call the format "mono" when it's configured as stereo.

  2. Why use mach_timebase_info, which could leave gaps in your audio presentation timestamps? Use a sample count instead:

    CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);

  3. Your CMSampleTimingInfo looks wrong, and you're not using presentationTime. You set the buffer's duration to a length of 1 sample when it could be numSamples, and its presentation time to zero, which isn't right. Something like this would make more sense:

    CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };

And some further questions:

Does your AudioBufferList contain the expected 2 AudioBuffers? Do you have a runnable version of this?

p.s. I'm guilty of this myself, but allocating memory on the audio thread is considered harmful in audio dev.