Correct way to draw/edit a CVPixelBuffer in Swift on iOS
Is there a standard, efficient way to edit/draw on a CVImageBuffer/CVPixelBuffer in Swift?

All the video-editing demos I've found online overlay their drawing (rectangles or text) on screen instead of editing the CVPixelBuffer directly.

Update: I tried using a CGContext, but the saved video doesn't show the context drawing:
private var adapter: AVAssetWriterInputPixelBufferAdaptor?

extension TrainViewController: CameraFeedManagerDelegate {
    func didOutput(sampleBuffer: CMSampleBuffer) {
        let time = CMTime(seconds: timestamp - _time, preferredTimescale: CMTimeScale(600))
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: alphaInfo.rawValue)
        else {
            return
        }

        context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
        context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
        context.flush()

        adapter?.append(pixelBuffer, withPresentationTime: time)
    }
}
You need to call CVPixelBufferLockBaseAddress(pixelBuffer, []) before creating the bitmap CGContext, and CVPixelBufferUnlockBaseAddress(pixelBuffer, []) after you have finished drawing into the context.

Without locking the pixel buffer, CVPixelBufferGetBaseAddress() returns NULL. That causes your CGContext to allocate new memory of its own to draw into, which is then discarded instead of being written back to the pixel buffer.

Also double-check your color space. It's easy to mix up your components.

For example:
guard
    CVPixelBufferLockBaseAddress(pixelBuffer, []) == kCVReturnSuccess,
    let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                            space: colorSpace,
                            bitmapInfo: alphaInfo.rawValue)
else {
    return nil
}

context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1.0)
context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))

CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
adapter?.append(pixelBuffer, withPresentationTime: time)
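Putting the pieces together, a self-contained helper might look like the sketch below. The color-space and bitmap-info choices here are assumptions: they match kCVPixelFormatType_32BGRA, the format AVCaptureVideoDataOutput commonly vends, so adjust them if your buffers use a different pixel format.

```swift
import CoreGraphics
import CoreVideo

/// Draws a red ellipse over the contents of a pixel buffer, in place.
/// Assumes the buffer is kCVPixelFormatType_32BGRA.
func drawEllipse(on pixelBuffer: CVPixelBuffer) -> Bool {
    // Lock before touching the base address; defer guarantees the
    // matching unlock on every exit path.
    guard CVPixelBufferLockBaseAddress(pixelBuffer, []) == kCVReturnSuccess else {
        return false
    }
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    // BGRA bytes in memory read as 32-bit little-endian ARGB to CGContext.
    let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue
        | CGImageAlphaInfo.premultipliedFirst.rawValue

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo)
    else {
        return false
    }

    context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
    context.fillEllipse(in: CGRect(x: 0, y: 0, width: width, height: height))
    return true
}
```

Calling this before `adapter?.append(pixelBuffer, withPresentationTime: time)` mutates the same backing memory the writer consumes, which is why the drawing survives into the saved video.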