completionHandler of AVAudioPlayerNode.scheduleFile() is called too early
I'm trying to use the new AVAudioEngine in iOS 8.
It looks like the completionHandler of player.scheduleFile() is called before the sound file has finished playing.
I am using a sound file that is 5 seconds long, and the println() message appears roughly 1 second before the end of the sound.
Am I doing something wrong, or am I misunderstanding the idea of a completionHandler?
Thanks!
Here is some code:
import AVFoundation

class SoundHandler {
    let engine: AVAudioEngine
    let player: AVAudioPlayerNode
    let mainMixer: AVAudioMixerNode

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        engine.attachNode(player)
        mainMixer = engine.mainMixerNode

        var error: NSError?
        if !engine.startAndReturnError(&error) {
            if let e = error {
                println("error \(e.localizedDescription)")
            }
        }

        engine.connect(player, to: mainMixer, format: mainMixer.outputFormatForBus(0))
    }

    func playSound() {
        let soundUrl = NSBundle.mainBundle().URLForResource("Test", withExtension: "m4a")
        let soundFile = AVAudioFile(forReading: soundUrl, error: nil)
        player.scheduleFile(soundFile, atTime: nil, completionHandler: { println("Finished!") })
        player.play()
    }
}
I'm seeing the same behavior.
From my experimentation, I believe the callback fires once the buffer/segment/file has been "scheduled", not when it has finished playing.
Even though the docs explicitly state:
"Called after the buffer has completely played or the player is stopped. May be nil."
So I think this is either a bug or incorrect documentation. No idea which!
The iOS 8-era AVAudioEngine documentation must simply have been wrong. In the meantime, as a workaround, I noticed that if you use scheduleBuffer:atTime:options:completionHandler: instead, the callback fires as expected (after playback finishes).
Example code:
AVAudioFile *file = [[AVAudioFile alloc] initForReading:_fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:nil];
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(AVAudioFrameCount)file.length];
NSError *error = nil;
[file readIntoBuffer:buffer error:&error];
[_player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferInterrupts completionHandler:^{
    // reminder: we're not on the main thread in here
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"done playing, as expected!");
    });
}];
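For reference, here is a rough Swift translation of the same workaround (a sketch using the modern Swift API names; the player node is assumed to be attached and connected as in the question):

import AVFoundation

// Load the whole file into a PCM buffer and schedule it; with
// scheduleBuffer the completion handler fires after playback finishes.
func scheduleWholeFileAsBuffer(_ fileURL: URL, on player: AVAudioPlayerNode) throws {
    let file = try AVAudioFile(forReading: fileURL,
                               commonFormat: .pcmFormatFloat32,
                               interleaved: false)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)
    player.scheduleBuffer(buffer, at: nil, options: .interrupts) {
        // reminder: we're not on the main thread in here
        DispatchQueue.main.async {
            NSLog("done playing, as expected!")
        }
    }
}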
You can always compute the future time at which audio playback will complete, using AVAudioTime. The current behavior is useful because it lets you schedule additional buffers/segments/files to play from the callback before the current buffer/segment/file finishes, avoiding gaps in audio playback. That lets you create a simple loop player without much work. Here's an example:
class Latch {
    var value: Bool = true
}

func loopWholeFile(file: AVAudioFile, player: AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length
    let sampleRate = file.processingFormat.sampleRate

    var segmentTime: AVAudioFramePosition = 0
    var segmentCompletion: AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            segmentTime += frames
            player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
        }
    }

    player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
    segmentCompletion()
    player.play()

    return looping
}
The code above schedules the entire file twice before calling player.play(). As each segment gets close to finishing, it schedules another whole file in the future, to avoid gaps in playback. To stop looping, you use the return value, a Latch, like this:
let looping = loopWholeFile(file, player)
sleep(1000)
looping.value = false
player.stop()
Yes, it does get called slightly before the file (or buffer) has finished playing. If you call [myNode stop] from within the completion handler, the file (or buffer) will not fully complete. However, if you call [myEngine stop], the file (or buffer) will play through to the end.
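A minimal sketch of that difference, assuming player and engine are wired up as in the question (the comments restate this answer's observation):

import AVFoundation

func illustrateStopBehavior(player: AVAudioPlayerNode, engine: AVAudioEngine, file: AVAudioFile) {
    player.scheduleFile(file, at: nil) {
        // Fires slightly before the sound is audibly finished (pre-iOS 11 behavior).
        player.stop()    // per this answer: cuts the file off before the end
        // engine.stop() // per this answer: would let the file play through to the end
    }
    player.play()
}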
My bug report about this was closed as "works as intended," but Apple pointed me to new variants of the scheduleFile, scheduleSegment, and scheduleBuffer methods in iOS 11. These add a completionCallbackType argument that you can use to specify that you want the completion callback when the playback is completed:
[self.audioUnitPlayer
    scheduleSegment:self.audioUnitFile
    startingFrame:sampleTime
    frameCount:(int)sampleLength
    atTime:0
    completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
    completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
        // do something here
    }];
The documentation doesn't say anything about how this works, but I tested it and it works for me.
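In Swift the iOS 11 variant looks like this (a minimal sketch; besides .dataPlayedBack, AVAudioPlayerNodeCompletionCallbackType also offers .dataConsumed and .dataRendered):

import AVFoundation

// iOS 11+: ask for the callback only once the data has actually been
// played back through the output, not when it has merely been consumed.
func scheduleWithPlaybackCallback(file: AVAudioFile, player: AVAudioPlayerNode) {
    player.scheduleFile(file, at: nil, completionCallbackType: .dataPlayedBack) { callbackType in
        // callbackType is .dataPlayedBack here; still not on the main thread.
        print("playback actually finished")
    }
}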
I had been using this workaround for iOS 8-10:
- (void)playRecording {
    [self.audioUnitPlayer scheduleSegment:self.audioUnitFile startingFrame:sampleTime frameCount:(int)sampleLength atTime:0 completionHandler:^() {
        float totalTime = [self recordingDuration];
        float elapsedTime = [self recordingCurrentTime];
        float remainingTime = totalTime - elapsedTime;
        [self performSelector:@selector(doSomethingHere) withObject:nil afterDelay:remainingTime];
    }];
}

- (float)recordingDuration {
    float duration = self.audioUnitFile.length / self.audioUnitFile.processingFormat.sampleRate;
    if (isnan(duration)) {
        duration = 0;
    }
    return duration;
}

- (float)recordingCurrentTime {
    AVAudioTime *nodeTime = self.audioUnitPlayer.lastRenderTime;
    AVAudioTime *playerTime = [self.audioUnitPlayer playerTimeForNodeTime:nodeTime];
    AVAudioFramePosition sampleTime = playerTime.sampleTime;
    if (sampleTime == 0) { return self.audioUnitLastKnownTime; } // this happens when the player isn't playing
    sampleTime += self.audioUnitStartingFrame; // if we trimmed from the start, or changed the location with the location slider, the time before that point won't be included in the player time, so we have to track it ourselves and add it here
    float time = sampleTime / self.audioUnitFile.processingFormat.sampleRate;
    self.audioUnitLastKnownTime = time;
    return time;
}
// audioFile here is our original audio
audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: {
    print("scheduleFile Complete")

    var delayInSeconds: Double = 0
    if let lastRenderTime = self.audioPlayerNode.lastRenderTime,
       let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
        if let rate = rate {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate) / Double(rate)
        } else {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate)
        }
    }

    // schedule a stop timer for when audio finishes playing
    DispatchTime.executeAfter(seconds: delayInSeconds) {
        audioEngine.mainMixerNode.removeTap(onBus: 0)
        // Playback has completed
    }
})
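Note that DispatchTime.executeAfter is not a standard library API; presumably it is a small custom convenience along these lines:

import Foundation

// Assumed shape of the custom helper used above (not part of the standard library).
extension DispatchTime {
    static func executeAfter(seconds: Double, _ block: @escaping () -> Void) {
        DispatchQueue.main.asyncAfter(deadline: .now() + seconds, execute: block)
    }
}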
As of today, in a project with a deployment target of 12.4, on a device running 12.4.1, here is the way we found to successfully stop the nodes upon playback completion:
// audioFile and playerNode created here ...

playerNode.scheduleFile(audioFile, at: nil, completionCallbackType: .dataPlayedBack) { _ in
    os_log(.debug, log: self.log, "%@", "Completing playing sound effect: \(filePath) ...")
    DispatchQueue.main.async {
        os_log(.debug, log: self.log, "%@", "... now actually completed: \(filePath)")
        self.engine.disconnectNodeOutput(playerNode)
        self.engine.detach(playerNode)
    }
}
The main difference with respect to the previous answers is that the node detaching is postponed and executed on the main thread, rather than being performed on the callback thread (which, I guess, is also the audio render thread?).