Trying to set up an audio unit graph with a buffer of samples as the input
I'm trying to implement a simple audio unit graph:
buffer of samples -> low pass filter -> generic output
The generic output would then be copied into a new buffer, which could be further processed, saved to disk, and so on.
All of the examples I can find online for setting up an audio unit graph use a generator with kAudioUnitSubType_AudioFilePlayer as the input source... I'm already handling the acquisition of the sample buffer myself, so those examples don't help. From poking around in AudioUnitProperties.h, it looks like what I should be using is kAudioUnitSubType_ScheduledSoundPlayer?
I can't seem to find much documentation on how to hook this up, so I'm quite stuck and hoping someone here can help me out.
To simplify things, I started out by just trying to get my buffer of samples to play directly through the system output, but I can't get that to work...
#import "EffectMachine.h"
#import <AudioToolbox/AudioToolbox.h>
#import "AudioHelpers.h"
#import "Buffer.h"
@interface EffectMachine ()
@property (nonatomic, strong) Buffer *buffer;
@end
typedef struct EffectMachineGraph {
AUGraph graph;
AudioUnit input;
AudioUnit lowpass;
AudioUnit output;
} EffectMachineGraph;
@implementation EffectMachine {
EffectMachineGraph machine;
}
-(instancetype)initWithBuffer:(Buffer *)buffer {
if (self = [super init]) {
self.buffer = buffer;
// buffer is a simple wrapper object that holds two properties:
// a pointer to the array of samples (as doubles) and the size (number of samples)
}
return self;
}
-(void)process {
struct EffectMachineGraph initialized = {0};
machine = initialized;
CheckError(NewAUGraph(&machine.graph),
"NewAUGraph failed");
AudioComponentDescription outputCD = {0};
outputCD.componentType = kAudioUnitType_Output;
outputCD.componentSubType = kAudioUnitSubType_DefaultOutput;
outputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode outputNode;
CheckError(AUGraphAddNode(machine.graph,
&outputCD,
&outputNode),
"AUGraphAddNode[kAudioUnitSubType_GenericOutput] failed");
AudioComponentDescription inputCD = {0};
inputCD.componentType = kAudioUnitType_Generator;
inputCD.componentSubType = kAudioUnitSubType_ScheduledSoundPlayer;
inputCD.componentManufacturer = kAudioUnitManufacturer_Apple;
AUNode inputNode;
CheckError(AUGraphAddNode(machine.graph,
&inputCD,
&inputNode),
"AUGraphAddNode[kAudioUnitSubType_ScheduledSoundPlayer] failed");
CheckError(AUGraphOpen(machine.graph),
"AUGraphOpen failed");
CheckError(AUGraphNodeInfo(machine.graph,
inputNode,
NULL,
&machine.input),
"AUGraphNodeInfo failed");
CheckError(AUGraphConnectNodeInput(machine.graph,
inputNode,
0,
outputNode,
0),
"AUGraphConnectNodeInput");
CheckError(AUGraphInitialize(machine.graph),
"AUGraphInitialize failed");
// prepare input
AudioBufferList ioData = {0};
ioData.mNumberBuffers = 1;
ioData.mBuffers[0].mNumberChannels = 1;
ioData.mBuffers[0].mDataByteSize = (UInt32)(2 * self.buffer.size);
ioData.mBuffers[0].mData = self.buffer.samples;
ScheduledAudioSlice slice = {0};
AudioTimeStamp timeStamp = {0};
slice.mTimeStamp = timeStamp;
slice.mNumberFrames = (UInt32)self.buffer.size;
slice.mBufferList = &ioData;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleAudioSlice,
kAudioUnitScope_Global,
0,
&slice,
sizeof(slice)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
AudioTimeStamp startTimeStamp = {0};
startTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
startTimeStamp.mSampleTime = -1;
CheckError(AudioUnitSetProperty(machine.input,
kAudioUnitProperty_ScheduleStartTimeStamp,
kAudioUnitScope_Global,
0,
&startTimeStamp,
sizeof(startTimeStamp)),
"AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp] failed");
CheckError(AUGraphStart(machine.graph),
"AUGraphStart failed");
// AUGraphStop(machine.graph); <-- commented out to make sure it wasn't stopping before actually finishing playing.
// AUGraphUninitialize(machine.graph);
// AUGraphClose(machine.graph);
}
Does anyone know what I'm doing wrong?
I think this is the documentation you're looking for.
To summarize: set up your AUGraph, set up your audio units and add them to the graph, then write a render callback function and attach it to the first node in the graph. Run the graph. Note that the render callback is where the system asks your app to supply buffers of samples to the AUGraph. That is where you need to read from your own buffer and fill the buffers handed to you by the render callback. I think that's the piece you're missing.
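As a rough illustration of that idea (a minimal sketch only; PlayerState, InputRenderCallback, firstNode, and playerState are hypothetical names, not taken from your project, and it assumes the destination input's stream format has been set to mono Float32):

    #import <AudioToolbox/AudioToolbox.h>

    // Hypothetical player state: a mono buffer of Float32 samples plus a read position.
    typedef struct {
        Float32 *samples;      // samples to feed into the graph
        UInt32   totalFrames;  // number of frames available
        UInt32   readIndex;    // next frame to hand out
    } PlayerState;

    // Core Audio calls this whenever the graph needs more input on that bus.
    static OSStatus InputRenderCallback(void                       *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp       *inTimeStamp,
                                        UInt32                      inBusNumber,
                                        UInt32                      inNumberFrames,
                                        AudioBufferList            *ioData)
    {
        PlayerState *player = (PlayerState *)inRefCon;
        Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

        for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
            if (player->readIndex < player->totalFrames) {
                out[frame] = player->samples[player->readIndex++];
            } else {
                out[frame] = 0.0f;   // past the end of the buffer: output silence
            }
        }
        return noErr;
    }

    // Attach the callback to the first node of the graph (the low-pass node,
    // or the output node directly if there are no effects):
    // AURenderCallbackStruct callback = { InputRenderCallback, &playerState };
    // CheckError(AUGraphSetNodeInputCallback(graph, firstNode, 0, &callback),
    //            "AUGraphSetNodeInputCallback failed");

With a callback-driven input like this you don't need the ScheduledSoundPlayer unit at all; the graph simply pulls samples out of your buffer on demand.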
If you're on iOS 8, I'd recommend AVAudioEngine, which helps hide some of the rough boilerplate details of graphs and effects.
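For example, a minimal AVAudioEngine sketch of the same player -> low-pass -> output chain (assuming your samples have already been copied into an AVAudioPCMBuffer called pcmBuffer; the format and cutoff values here are illustrative):

    #import <AVFoundation/AVFoundation.h>

    static void PlayBufferWithEngine(AVAudioPCMBuffer *pcmBuffer) {
        // In a real app, keep strong references to engine and player (e.g. in properties)
        // so they outlive this call and playback isn't cut short.
        AVAudioEngine *engine = [[AVAudioEngine alloc] init];
        AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];

        // A single-band EQ configured as a low-pass filter stands in for the low-pass audio unit.
        AVAudioUnitEQ *lowpass = [[AVAudioUnitEQ alloc] initWithNumberOfBands:1];
        AVAudioUnitEQFilterParameters *band = lowpass.bands[0];
        band.filterType = AVAudioUnitEQFilterTypeLowPass;
        band.frequency  = 1000.0;   // illustrative cutoff in Hz
        band.bypass     = NO;

        [engine attachNode:player];
        [engine attachNode:lowpass];

        AVAudioFormat *format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0
                                                                                channels:1];
        [engine connect:player to:lowpass format:format];
        [engine connect:lowpass to:engine.mainMixerNode format:format];

        NSError *error = nil;
        if ([engine startAndReturnError:&error]) {
            [player scheduleBuffer:pcmBuffer completionHandler:nil];
            [player play];
        }
    }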
Extra:
- iOS8 example code on github
- iOS Music player app that reads audio from the MP3 library into a circular buffer and then processes it through an AUGraph (with mixer and EQ AUs). You can see how the render callback is set up to read from a buffer there; a minimal ring-buffer read is also sketched after this list.
- Amazing Audio Engine
- Novocaine Audio library
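The circular-buffer part of that pattern boils down to a wrap-around read inside the render callback. A minimal sketch (RingBuffer and RingBufferRead are hypothetical names, not from the linked project; a production ring buffer needs proper thread-safety, e.g. TPCircularBuffer from The Amazing Audio Engine):

    // Hypothetical mono ring buffer that a producer thread keeps topped up.
    typedef struct {
        Float32 *data;               // storage, capacity frames long
        UInt32   capacity;           // total frames in the ring
        volatile UInt32 readIndex;   // consumer position (render thread)
        volatile UInt32 writeIndex;  // producer position (file-reading thread)
    } RingBuffer;

    // Called from the render callback: copy out inNumberFrames, wrapping at the end of the ring.
    static void RingBufferRead(RingBuffer *ring, Float32 *out, UInt32 inNumberFrames) {
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (ring->readIndex == ring->writeIndex) {
                out[i] = 0.0f;                       // underrun: emit silence
            } else {
                out[i] = ring->data[ring->readIndex];
                ring->readIndex = (ring->readIndex + 1) % ring->capacity;
            }
        }
    }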