Scheduling an audio file for playback in the future with AVAudioTime
I am trying to figure out how to correctly schedule an audio file to play in the near future. My actual goal is to play multiple tracks in sync.
So how do I configure 'aTime' correctly so that playback starts roughly 0.3 seconds from now?
I think I may also need hostTime, but I don't know how to use it properly.
func createStartTime() -> AVAudioTime? {
    var time: AVAudioTime?
    if let lastPlayer = self.trackPlayerDictionary[lastPlayerKey] {
        if let sampleRate = lastPlayer.file?.processingFormat.sampleRate {
            let sampleTime = AVAudioFramePosition(shortStartDelay * sampleRate)
            time = AVAudioTime(sampleTime: sampleTime, atRate: sampleRate)
        }
    }
    return time
}
Here is the function I use to start playback:
func playAtTime(aTime: AVAudioTime?) {
    self.startingFrame = AVAudioFramePosition(self.currentTime * self.file!.processingFormat.sampleRate)
    let frameCount = AVAudioFrameCount(self.file!.length - self.startingFrame!)
    self.player.scheduleSegment(self.file!, startingFrame: self.startingFrame!, frameCount: frameCount, atTime: aTime, completionHandler: { () -> Void in
        NSLog("done playing") // actually done scheduling
    })
    self.player.play()
}
I figured it out!
I filled in the hostTime parameter with mach_absolute_time(), which is the computer/iPad's 'now' time. AVAudioTime(hostTime:sampleTime:atRate:) adds sampleTime to hostTime and returns a time in the near future that can be used to schedule multiple audio segments at the same start time:
func createStartTime() -> AVAudioTime? {
    var time: AVAudioTime?
    if let lastPlayer = self.trackPlayerDictionary[lastPlayerKey] {
        if let sampleRate = lastPlayer.file?.processingFormat.sampleRate {
            let sampleTime = AVAudioFramePosition(shortStartDelay * sampleRate)
            time = AVAudioTime(hostTime: mach_absolute_time(), sampleTime: sampleTime, atRate: sampleRate)
        }
    }
    return time
}
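To illustrate, here is a minimal sketch of using one hostTime-anchored AVAudioTime to start several players together. It assumes an array of AVAudioPlayerNodes already attached and connected to a running AVAudioEngine, one file per player; the helper and function names are illustrative, not from the original code:

```swift
import AVFoundation

/// Convert a delay in seconds to a frame count at the given sample rate.
func delayInFrames(_ delay: Double, sampleRate: Double) -> AVAudioFramePosition {
    return AVAudioFramePosition(delay * sampleRate)
}

/// Schedule every player to begin at the same moment, `delay` seconds from now.
/// Assumes the players are attached/connected and the engine is already running.
func startInSync(players: [AVAudioPlayerNode], files: [AVAudioFile], delay: Double = 0.3) {
    guard let sampleRate = files.first?.processingFormat.sampleRate else { return }
    // Anchor "now" with mach_absolute_time(), offset by the delay in sample frames.
    let startTime = AVAudioTime(hostTime: mach_absolute_time(),
                                sampleTime: delayInFrames(delay, sampleRate: sampleRate),
                                atRate: sampleRate)
    for (player, file) in zip(players, files) {
        player.scheduleFile(file, at: startTime, completionHandler: nil)
        player.play()
    }
}
```

Because every player receives the identical AVAudioTime, their first frames land on the same render cycle.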
OK, it's ObjC, but you'll get the point...
There is no need for mach_absolute_time() - if your engine is running, you already have a lastRenderTime @property in AVAudioNode, your player's superclass...
AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];
const float kStartDelayTime = 0.0; // seconds - in case you wanna delay the start
AVAudioFramePosition startSampleTime = playerA.lastRenderTime.sampleTime;
AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(startSampleTime + (kStartDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate];
[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];
[player...
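For Swift readers, a rough equivalent of the ObjC snippet above might look like the following; the function name is illustrative, and the engine setup is assumed to have happened elsewhere:

```swift
import AVFoundation

/// Compute the start sample time: current render position plus a delay in frames.
func startSample(from renderSample: AVAudioFramePosition, delay: Double, sampleRate: Double) -> AVAudioFramePosition {
    return renderSample + AVAudioFramePosition(delay * sampleRate)
}

/// Start several AVAudioPlayerNodes at the same sample time, relative to the
/// engine's current render position. Requires the engine to be running,
/// otherwise lastRenderTime is nil and nothing is scheduled.
func startInSync(_ players: [AVAudioPlayerNode], startDelay: Double = 0.0) {
    guard let first = players.first,
          let renderTime = first.lastRenderTime else { return }
    let outputFormat = first.outputFormat(forBus: 0)
    let startTime = AVAudioTime(sampleTime: startSample(from: renderTime.sampleTime,
                                                        delay: startDelay,
                                                        sampleRate: outputFormat.sampleRate),
                                atRate: outputFormat.sampleRate)
    for player in players {
        player.play(at: startTime)
    }
}
```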
By the way, you can get the same 100% sample-frame-accurate result with the AVAudioPlayer class...
NSTimeInterval startDelayTime = 0.0; // seconds - in case you wanna delay the start
NSTimeInterval now = playerA.deviceCurrentTime;
NSTimeInterval startTime = now + startDelayTime;
[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];
[player...
Without a startDelayTime, the first 100-200 ms of all players will get clipped off, because although the players have started (well, been scheduled) 100% in sync, the start command actually takes its time to get through the run loop. With startDelayTime = 0.25 you are ready to go. And never forget to prepareToPlay your players in advance, so that no additional buffering or setup has to be done at start time - just start them ;-)
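In Swift, the same AVAudioPlayer pattern with the prepareToPlay() call and 0.25 s delay recommended above could be sketched as follows (player construction omitted; the function name is illustrative):

```swift
import AVFoundation

/// Start several AVAudioPlayers in sync, `startDelay` seconds from now.
/// prepareToPlay() pre-buffers each player so nothing is clipped at start.
func startInSync(_ players: [AVAudioPlayer], startDelay: TimeInterval = 0.25) {
    players.forEach { $0.prepareToPlay() }
    guard let reference = players.first else { return }
    // deviceCurrentTime is shared across all players on the same audio device,
    // so one value anchors every play(atTime:) call to the same instant.
    let startTime = reference.deviceCurrentTime + startDelay
    players.forEach { $0.play(atTime: startTime) }
}
```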
For a more in-depth explanation, check out my answer in
AVAudioEngine multiple AVAudioInputNodes do not play in perfect sync