Is audioSettings property missing in AVCaptureAudioDataOutput in the Swift header?
I am developing an iOS app in which I want to record segmented video. I have read https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html and I have a working solution using AVCaptureVideoDataOutput, where I capture frames and write them to a file with AVAssetWriter. I add the AVCaptureVideoDataOutput to the AVCaptureSession like this:
// Setup videoDataOutput in order to capture samplebuffers
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable : Int(kCVPixelFormatType_32BGRA)]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: CaptureManager.CAPTURE_QUEUE)
guard captureSession.canAddOutput(videoDataOutput) else {
    return
}
captureSession.addOutput(videoDataOutput)
self.videoDataOutput = videoDataOutput
This works well: I can run the capture session successfully and get a playable movie file.
Now I want to add audio, so I wanted to do the same thing:
// Setup audioDataOutput in order to capture audio
let audioDataOutput = AVCaptureAudioDataOutput()
audioDataOutput.audioSettings = ...
audioDataOutput.setSampleBufferDelegate(self, queue: CaptureManager.CAPTURE_QUEUE)
guard captureSession.canAddOutput(audioDataOutput) else {
    return
}
captureSession.addOutput(audioDataOutput)
self.audioDataOutput = audioDataOutput
The crazy thing is that there is no audioSettings property on AVCaptureAudioDataOutput! The documentation says it exists: https://developer.apple.com/reference/avfoundation/avcaptureaudiodataoutput/1388527-audiosettings but the Swift header has no such member (see below).
What is going on here? I am using Xcode 8.1. The Swift header for the class AVCaptureAudioDataOutput follows:
import AVFoundation
import CoreMedia
import Foundation
/*!
@class AVCaptureAudioDataOutput
@abstract
AVCaptureAudioDataOutput is a concrete subclass of AVCaptureOutput that can be used to process uncompressed or compressed samples from the audio being captured.
@discussion
Instances of AVCaptureAudioDataOutput produce audio sample buffers suitable for processing using other media APIs. Applications can access the sample buffers with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
*/
@available(iOS 4.0, *)
open class AVCaptureAudioDataOutput : AVCaptureOutput {
/*!
@method setSampleBufferDelegate:queue:
@abstract
Sets the receiver's delegate that will accept captured buffers and dispatch queue on which the delegate will be called.
@param sampleBufferDelegate
An object conforming to the AVCaptureAudioDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured.
@param sampleBufferCallbackQueue
A dispatch queue on which all sample buffer delegate methods will be called.
@discussion
When a new audio sample buffer is captured it will be vended to the sample buffer delegate using the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. All delegate methods will be called on the specified dispatch queue. If the queue is blocked when new samples are captured, those samples will be automatically dropped when they become sufficiently late. This allows clients to process existing samples on the same queue without having to manage the potential memory usage increases that would otherwise occur when that processing is unable to keep up with the rate of incoming samples.
Clients that need to minimize the chances of samples being dropped should specify a queue on which a sufficiently small amount of processing is being done outside of receiving sample buffers. However, if such clients migrate extra processing to another queue, they are responsible for ensuring that memory usage does not grow without bound from samples that have not been processed.
A serial dispatch queue must be used to guarantee that audio samples will be delivered in order. The sampleBufferCallbackQueue parameter may not be NULL, except when setting sampleBufferDelegate to nil.
*/
open func setSampleBufferDelegate(_ sampleBufferDelegate: AVCaptureAudioDataOutputSampleBufferDelegate!, queue sampleBufferCallbackQueue: DispatchQueue!)
/*!
@property sampleBufferDelegate
@abstract
The receiver's delegate.
@discussion
The value of this property is an object conforming to the AVCaptureAudioDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured. The delegate is set using the setSampleBufferDelegate:queue: method.
*/
open var sampleBufferDelegate: AVCaptureAudioDataOutputSampleBufferDelegate! { get }
/*!
@property sampleBufferCallbackQueue
@abstract
The dispatch queue on which all sample buffer delegate methods will be called.
@discussion
The value of this property is a dispatch_queue_t. The queue is set using the setSampleBufferDelegate:queue: method.
*/
open var sampleBufferCallbackQueue: DispatchQueue! { get }
/*!
@property audioSettings
@abstract
Specifies the settings used to decode or re-encode audio before it is output by the receiver.
@discussion
The value of this property is an NSDictionary containing values for audio settings keys defined in AVAudioSettings.h. When audioSettings is set to nil, the AVCaptureAudioDataOutput vends samples in their device native format.
*/
// (TARGET_OS_MAC && !(TARGET_OS_EMBEDDED || TARGET_OS_IPHONE))
/*!
@method recommendedAudioSettingsForAssetWriterWithOutputFileType:
@abstract
Specifies the recommended settings for use with an AVAssetWriterInput.
@param outputFileType
Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
@return
A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
@discussion
The value of this property is an NSDictionary containing values for compression settings keys defined in AVAudioSettings.h. This dictionary is suitable for use as the "outputSettings" parameter when creating an AVAssetWriterInput, such as,
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:outputSettings sourceFormatHint:hint];
The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in depth discussion). For QuickTime movie and ISO files, the recommended audio settings will always produce output comparable to that of AVCaptureMovieFileOutput.
Note that the dictionary of settings is dependent on the current configuration of the receiver's AVCaptureSession and its inputs. The settings dictionary may change if the session's configuration changes. As such, you should configure your session first, then query the recommended audio settings.
*/
@available(iOS 7.0, *)
open func recommendedAudioSettingsForAssetWriter(withOutputFileType outputFileType: String!) -> [AnyHashable : Any]!
}
/*!
@protocol AVCaptureAudioDataOutputSampleBufferDelegate
@abstract
Defines an interface for delegates of AVCaptureAudioDataOutput to receive captured audio sample buffers.
*/
public protocol AVCaptureAudioDataOutputSampleBufferDelegate : NSObjectProtocol {
/*!
@method captureOutput:didOutputSampleBuffer:fromConnection:
@abstract
Called whenever an AVCaptureAudioDataOutput instance outputs a new audio sample buffer.
@param captureOutput
The AVCaptureAudioDataOutput instance that output the samples.
@param sampleBuffer
A CMSampleBuffer object containing the audio samples and additional information about them, such as their format and presentation time.
@param connection
The AVCaptureConnection from which the audio was received.
@discussion
Delegates receive this message whenever the output captures and outputs new audio samples, decoding or re-encoding as specified by the audioSettings property. Delegates can use the provided sample buffer in conjunction with other APIs for further processing. This method will be called on the dispatch queue specified by the output's sampleBufferCallbackQueue property. This method is called periodically, so it must be efficient to prevent capture performance problems, including dropped audio samples.
Clients that need to reference the CMSampleBuffer object outside of the scope of this method must CFRetain it and then CFRelease it when they are finished with it.
*/
@available(iOS 4.0, *)
optional public func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
}
AVCaptureAudioDataOutput.audioSettings is only available on macOS. You may be able to change the sample rate using AVAudioSession (see the sketch below), but otherwise you will have to arrange any conversion you want to do yourself.
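A minimal sketch of the AVAudioSession approach; the category and the 44.1 kHz rate here are assumptions, and the system treats the preferred rate as a hint, not a guarantee:
// Hypothetical example: request a preferred hardware sample rate via the shared session.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setPreferredSampleRate(44_100)  // a preference, not a guarantee
    try session.setActive(true)
} catch {
    print("AVAudioSession configuration failed: \(error)")
}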
There are many ways to handle that conversion, but the outputSettings of AVAssetWriterInput.init(mediaType:outputSettings:) seems like a good starting point.
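For example, a sketch that feeds the output's recommended settings into an AVAssetWriterInput, using the Swift 3 / Xcode 8 naming from the question; assetWriter is assumed to be an already configured AVAssetWriter:
// Query the recommended settings only after the session is fully configured.
let settings = audioDataOutput.recommendedAudioSettingsForAssetWriter(
    withOutputFileType: AVFileTypeQuickTimeMovie) as? [String : Any]
let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio,
                                          outputSettings: settings)
audioWriterInput.expectsMediaDataInRealTime = true
if assetWriter.canAdd(audioWriterInput) {
    assetWriter.add(audioWriterInput)
}
Note that, per the header above, the recommended settings depend on the session's current configuration, so configure the session first and query the settings afterwards.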