Converting CVImageBuffer to YUV420 object
I want to keep the streaming video coming off the camera in its YUV420 format to avoid the penalty of a grayscale conversion, while still holding on to the color components. The end goal is processing with a computer vision library like OpenCV. Although I may end up going with BGRA, I still want a working solution I can test against YUV. So how do you convert a CVImageBuffer with the pixel format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange into a single block of memory?
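As a point of reference, here is a minimal sketch of how you'd ask the camera for that format in the first place. Assume captureSession is an already-configured AVCaptureSession; the queue label is illustrative.

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
// Deliver frames on a serial background queue
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("no.ntnu.video", DISPATCH_QUEUE_SERIAL)];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}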
Rejected solutions:
- CIImage is super convenient, but it won't let you render out to a YUV-formatted bitmap.
- cv::Mat pollutes your Obj-C code with C++.
AVCaptureVideoDataOutputSampleBufferDelegate
This stuffs the data into an NSObject holding the raw bytes, sized according to whichever pixel format was specified. I went ahead and made it detect and malloc memory for either the BGRA or the YUV pixel format, so this solution is well suited to testing both.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef videoImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(videoImageBuffer, 0);

    void *baseAddress = NULL;
    NSUInteger totalBytes = 0;
    size_t width = CVPixelBufferGetWidth(videoImageBuffer);
    size_t height = 0;
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(videoImageBuffer);
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(videoImageBuffer);

    if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ||
        pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
        // Bi-planar YUV: the Y plane and the interleaved CbCr plane sit back to
        // back in camera-delivered buffers, so take the base address of plane 0
        // and sum the plane sizes to cover both in a single block.
        size_t planeCount = CVPixelBufferGetPlaneCount(videoImageBuffer);
        baseAddress = CVPixelBufferGetBaseAddressOfPlane(videoImageBuffer, 0);
        for (size_t plane = 0; plane < planeCount; plane++) {
            size_t planeHeight = CVPixelBufferGetHeightOfPlane(videoImageBuffer, plane);
            size_t planeBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(videoImageBuffer, plane);
            height += planeHeight;
            totalBytes += planeHeight * planeBytesPerRow;
        }
    } else if (pixelFormat == kCVPixelFormatType_32BGRA) {
        // Packed BGRA: a single plane, so the plain getters are all we need.
        baseAddress = CVPixelBufferGetBaseAddress(videoImageBuffer);
        height = CVPixelBufferGetHeight(videoImageBuffer);
        totalBytes += height * bytesPerRow;
    }

    // Doesn't have to be an NSData object; it just has to copy the bytes
    // before the base address is unlocked below.
    NSData *rawPixelData = [NSData dataWithBytes:baseAddress length:totalBytes];

    // Just a plain-ol-NSObject with the following properties
    NTNUVideoFrame *videoFrame = [[NTNUVideoFrame alloc] init];
    videoFrame.width = width;
    videoFrame.height = height;
    videoFrame.bytesPerRow = bytesPerRow;
    videoFrame.pixelFormat = pixelFormat;
    // Alternatively, if you switch rawPixelData to void *:
    // videoFrame.rawPixelData = baseAddress;
    videoFrame.rawPixelData = rawPixelData;

    [self.delegate didUpdateVideoFrame:videoFrame];

    CVPixelBufferUnlockBaseAddress(videoImageBuffer, 0);
}
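For reference, here is a minimal sketch of what the NTNUVideoFrame container could look like; the original post doesn't show its declaration, so the property types below are inferred from how the delegate method above uses it.

// Plain data holder; types inferred from the delegate method's usage
@interface NTNUVideoFrame : NSObject
@property (nonatomic, assign) size_t width;
@property (nonatomic, assign) size_t height;
@property (nonatomic, assign) size_t bytesPerRow;
@property (nonatomic, assign) OSType pixelFormat;
@property (nonatomic, strong) NSData *rawPixelData; // or void *, see below
@end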
The only thing you need to remember is that if you plan on switching threads or using dispatch_async and you don't want to go through NSData, you will need to malloc and memcpy the base address yourself. The pixel data is no longer valid once the base address has been unlocked.
// Copy the pixels while the base address is still locked
void *rawPixelData = malloc(totalBytes);
memcpy(rawPixelData, baseAddress, totalBytes);
At that point you have to start thinking about calling free on that block of memory once you're done with it.
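One way to keep that manageable, assuming you switch rawPixelData to a void * property and let NTNUVideoFrame own the buffer, is to free it in dealloc. The override below is my own sketch, not part of the original post.

@implementation NTNUVideoFrame
- (void)dealloc
{
    // Assumes this object owns the malloc'd block copied above;
    // free(NULL) is a harmless no-op if nothing was ever assigned.
    free(_rawPixelData);
    _rawPixelData = NULL;
}
@end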