How to convert CMSampleBufferRef/CIImage/UIImage into pixels e.g. uint8_t[]
I have an input of CMSampleBufferRef from captured camera frames, and I need to get the raw pixels, preferably as the C type uint8_t[]. I also need to determine the color scheme of the input image.
I know how to convert a CMSampleBufferRef to a UIImage and then to NSData in PNG format, but I don't know how to get the raw pixels from there. Perhaps I could get them directly from the CMSampleBufferRef/CIImage?
This code shows the needed and missing bits. Any ideas where to start?
int convertCMSampleBufferToPixelArray (CMSampleBufferRef sampleBuffer)
{
    // inputs
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *imgContext = [CIContext new];
    CGImageRef cgImage = [imgContext createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage follows the create rule, so release it
    NSData *nsData = UIImagePNGRepresentation(uiImage);

    // Need to fill this gap
    uint8_t* data = XXXXXXXXXXXXXXXX;
    ImageFormat format = XXXXXXXXXXXXXXXX; // one of: GRAY8, RGB_888, YV12, BGRA_8888, ARGB_8888

    // sample showing expected data values
    // this routine converts the image data to gray, assuming 3 bytes per pixel (RGB)
    int width = uiImage.size.width;
    int height = uiImage.size.height;
    const int size = width * height;
    std::unique_ptr<uint8_t[]> new_data(new uint8_t[size]);
    for (int i = 0; i < size; ++i) {
        new_data[i] = uint8_t(data[i * 3] * 0.299f + data[i * 3 + 1] * 0.587f +
                              data[i * 3 + 2] * 0.114f + 0.5f);
    }
    return 1;
}
Here are some hints you can use to search for more info. It's all well documented, so you shouldn't have any problems.
int convertCMSampleBufferToPixelArray (CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (imageBuffer == NULL) {
        return -1;
    }

    // Lock the base address before accessing the pixel data
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    // Get size
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Get bytes per row
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // At `data` you have bytesPerRow * height bytes of the image data
    // To get pixel info you can call CVPixelBufferGetPixelFormatType, ...
    // you can call CVImageBufferGetColorSpace and inspect it, ...

    // When you're done, unlock the base address
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return 0;
}
There are a couple of things you should be aware of.
The first one is that the buffer can be planar. Check CVPixelBufferIsPlanar, CVPixelBufferGetPlaneCount, CVPixelBufferGetBytesPerRowOfPlane, and so on.
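A minimal sketch of walking the planes, assuming the buffer is planar (for example a 4:2:0 Y'CbCr camera format) and that the base address has been locked as in the snippet above:

if (CVPixelBufferIsPlanar(imageBuffer)) {
    size_t planeCount = CVPixelBufferGetPlaneCount(imageBuffer);
    for (size_t plane = 0; plane < planeCount; plane++) {
        uint8_t *planeData = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, plane);
        size_t planeWidth = CVPixelBufferGetWidthOfPlane(imageBuffer, plane);
        size_t planeHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, plane);
        size_t planeBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, plane);
        // Each plane has its own dimensions and row padding,
        // e.g. the chroma plane of a 4:2:0 buffer is subsampled.
    }
}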
The second one is that you have to calculate the pixel size based on CVPixelBufferGetPixelFormatType. Something like:
OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
size_t pixelSize;
switch (pixelFormat) {
    case kCVPixelFormatType_32BGRA:
    case kCVPixelFormatType_32ARGB:
    case kCVPixelFormatType_32ABGR:
    case kCVPixelFormatType_32RGBA:
        pixelSize = 4;
        break;
    // + other cases
}
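Since the question also asks for the color scheme, the same pixel format type can drive the ImageFormat value from the question. A minimal sketch, assuming the ImageFormat enum and its values (GRAY8, RGB_888, YV12, BGRA_8888, ARGB_8888) exist in your codebase as the question suggests; they are not Core Video names:

ImageFormat format; // hypothetical enum from the question's code
switch (pixelFormat) {
    case kCVPixelFormatType_32BGRA:
        format = BGRA_8888;
        break;
    case kCVPixelFormatType_32ARGB:
        format = ARGB_8888;
        break;
    case kCVPixelFormatType_24RGB:
        format = RGB_888;
        break;
    case kCVPixelFormatType_OneComponent8:
        format = GRAY8;
        break;
    case kCVPixelFormatType_420YpCbCr8Planar:
        // Planar 4:2:0 Y'CbCr; note that YV12 stores V before U,
        // so verify the plane order before treating them as identical.
        format = YV12;
        break;
    default:
        // Reject or convert other formats
        break;
}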
Let's assume the buffer is not planar and:
- CVPixelBufferGetWidth returns 200 (pixels)
- your pixelSize is 4 (the calculated bytes per row is 200 * 4 = 800)
- CVPixelBufferGetBytesPerRow can return anything >= 800
In other words, the pointer you have is not a pointer to a contiguous buffer. If you need the row data, you have to do something like this:
uint8_t *data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

// Get size
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

size_t pixelSize = 4; // Let's pretend it's the calculated pixel size
size_t realRowSize = width * pixelSize;
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

for (size_t row = 0; row < height; row++) {
    // bytesPerRow acts like an offset where the next row starts
    // bytesPerRow can be >= realRowSize
    uint8_t *rowData = data + row * bytesPerRow;
    // realRowSize = how many bytes are actual pixel data in this row
    // copy them somewhere
}
If you want a contiguous buffer, you have to allocate one and copy these rows into it. How many bytes to allocate? CVPixelBufferGetDataSize.
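A minimal sketch of that copy, reusing the names from the loop above. Allocating CVPixelBufferGetDataSize bytes follows the hint above; it may be somewhat larger than the tightly packed width * pixelSize * height, which is fine for a destination buffer. It needs <memory> and <cstring>:

// Assumes the base address is still locked and that `data`, `height`,
// `bytesPerRow` and `realRowSize` come from the previous snippet.
size_t dataSize = CVPixelBufferGetDataSize(imageBuffer);
std::unique_ptr<uint8_t[]> contiguous(new uint8_t[dataSize]);
for (size_t row = 0; row < height; row++) {
    // Source rows are bytesPerRow apart; destination rows are packed tightly
    memcpy(contiguous.get() + row * realRowSize,
           data + row * bytesPerRow,
           realRowSize);
}
// contiguous.get() now holds height * realRowSize tightly packed bytes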