MTKView displaying camera feed with lower resolution than AVCaptureVideoPreviewLayer
I'm trying to stream the camera feed into an MTKView so that I can apply some CI filters to the live stream.
After initializing the capture session and laying out the MTKView, this is how I set up Metal (metalView is the MTKView):
func setupMetal() {
    metalDevice = MTLCreateSystemDefaultDevice()
    metalView.device = metalDevice
    // Only draw when explicitly asked to
    metalView.isPaused = true
    metalView.enableSetNeedsDisplay = false
    // Command queue for the GPU
    metalCommandQueue = metalDevice.makeCommandQueue()
    // Assign the delegate
    metalView.delegate = self
    // Allow Core Image to write to the drawable's texture
    metalView.framebufferOnly = false
}
Then I grab the frames in the SampleBufferDelegate and get a CIImage:
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Try to get a CVImageBuffer out of the sample buffer
        guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        // Get a CIImage out of the CVImageBuffer
        let ciImage = CIImage(cvImageBuffer: cvBuffer)
        self.currentCIImage = ciImage
        // Draw to the metal view every time we receive a frame
        metalView.draw()
    }
}
Then I use the current CIImage to draw into the MTKView via its delegate method:
extension ViewController: MTKViewDelegate {
    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // Tells us the drawable's size has changed
    }

    func draw(in view: MTKView) {
        // Create a command buffer for ciContext to use to encode its rendering instructions to the GPU
        guard let commandBuffer = metalCommandQueue.makeCommandBuffer() else {
            return
        }
        // Make sure we actually have a CIImage to work with
        guard let ciImage = currentCIImage else {
            return
        }
        // Make sure the current drawable for this metal view is available (i.e. not in use by the previous draw cycle)
        guard let currentDrawable = view.currentDrawable else {
            return
        }
        // Render into the metal texture
        // Check here if we find a more elegant solution for the bounds
        self.ciContext.render(ciImage,
                              to: currentDrawable.texture,
                              commandBuffer: commandBuffer,
                              bounds: CGRect(origin: .zero, size: view.drawableSize),
                              colorSpace: CGColorSpaceCreateDeviceRGB())
        // Present the drawable once the command buffer executes
        commandBuffer.present(currentDrawable)
        // Commit the command buffer so it executes
        commandBuffer.commit()
    }
}
This works, and I'm able to see frames from the camera rendered in the MTKView. However, I noticed that I'm not getting the full resolution; somehow the image ends up zoomed in inside the MTKView. I know it has nothing to do with how I set up the capture session, because everything works fine when I use a standard AVCaptureVideoPreviewLayer. Any ideas on what I'm doing wrong?
Thanks in advance!
P.S. This code is largely based on this excellent tutorial: https://betterprogramming.pub/using-cifilters-metal-to-make-a-custom-camera-in-ios-c76134993316 but somehow it doesn't seem to work for me.
Depending on how the capture session is set up, the size of the camera frames will differ from the size of your MTKView. That means you need to scale and translate them to match the size of the currentDrawable before rendering. I use the following code for that (inside draw, just before the render call):
// `input` is the CIImage to display (currentCIImage in your code);
// in the delegate's draw(in:) the view is passed in, hence view.drawableSize
// Scale to fit into the view
let drawableSize = view.drawableSize
let scaleX = drawableSize.width / input.extent.width
let scaleY = drawableSize.height / input.extent.height
let scale = min(scaleX, scaleY)
let scaledImage = input.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
// Center in the view
let originX = max(drawableSize.width - scaledImage.extent.size.width, 0) / 2
let originY = max(drawableSize.height - scaledImage.extent.size.height, 0) / 2
let centeredImage = scaledImage.transformed(by: CGAffineTransform(translationX: originX, y: originY))
Then pass centeredImage (instead of the raw ciImage) to the ciContext.render call.
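To see why this fixes the zoom, the aspect-fit math can be checked with plain numbers, independent of Metal. The `aspectFit` helper below is hypothetical (not part of the answer's code) and uses plain Doubles so it runs anywhere; the example sizes (a 1080x1920 portrait frame into a 1170x2532 drawable) are illustrative assumptions:

```swift
import Foundation

// Hypothetical helper mirroring the answer's scale-and-center math.
// Returns the uniform scale factor and the origin at which the scaled
// image should be placed so it is centered in the drawable.
func aspectFit(imageWidth: Double, imageHeight: Double,
               drawableWidth: Double, drawableHeight: Double)
    -> (scale: Double, originX: Double, originY: Double) {
    // Pick the smaller ratio so the whole image fits inside the drawable
    let scale = min(drawableWidth / imageWidth, drawableHeight / imageHeight)
    let scaledWidth = imageWidth * scale
    let scaledHeight = imageHeight * scale
    // Center the scaled image; max(..., 0) guards against negative offsets
    let originX = max(drawableWidth - scaledWidth, 0) / 2
    let originY = max(drawableHeight - scaledHeight, 0) / 2
    return (scale, originX, originY)
}

// Example: a 1080x1920 portrait camera frame fitted into a 1170x2532 drawable.
let fit = aspectFit(imageWidth: 1080, imageHeight: 1920,
                    drawableWidth: 1170, drawableHeight: 2532)
print(fit)  // scale ≈ 1.083, origin ≈ (0, 226): letterboxed vertically, not cropped
```

Without this step, Core Image renders the frame at its native pixel size into the drawable's bounds, which is exactly the "zoomed in" effect described in the question.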