What are these extra bytes coming from the iPhone camera in portrait mode?
When I get a frame from
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I see the following data:
- Bytes per row: 1,472
- Length: 706,560
- Height: 480
- Width: 360
- Format: BGRA
This is from the front camera of an iPhone 6 Plus.
This doesn't make sense, because bytes per row should be width * channels (channels being 4 here, so 1,440). Instead it is (width + 8) * channels. Where are these extra 8 bytes coming from?
Here's my code. First, attaching the output to the session, where I set the orientation to portrait:
bool attachOutputToSession(AVCaptureSession *session, id cameraDelegate)
{
    assert(cameraDelegate);
    AVCaptureVideoDataOutput *m_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Create a serial queue for capturing frames
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", DISPATCH_QUEUE_SERIAL);
    // Use the AVCaptureVideoDataOutputSampleBufferDelegate capabilities of cameraDelegate
    [m_videoOutput setSampleBufferDelegate:cameraDelegate queue:captureQueue];
    // Set up the video output
    m_videoOutput.alwaysDiscardsLateVideoFrames = YES;
    NSNumber *framePixelFormat = [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]; // Crashes with 24RGB because that isn't supported on iPhone
    m_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:framePixelFormat forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    // Check whether the session can accept this output
    if ([session canAddOutput:m_videoOutput])
    {
        [session addOutput:m_videoOutput];
    }
    // Set connection settings
    for (AVCaptureConnection *connection in m_videoOutput.connections)
    {
        if (connection.isVideoMirroringSupported)
            connection.videoMirrored = YES;
        if (connection.isVideoOrientationSupported)
            connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return true;
}
I don't run into this problem when I set the orientation to LandscapeRight: there, bytes per row equals width * channels.
Here is where I read the numbers above:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Every lock needs a matching unlock
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Turns out this extra padding is part of the image's "stride". It is added whenever the image width does not divide evenly into the allocator's chosen memory alignment. The portrait images I receive are 360x480. Since 360 is not divisible by 16, 8 extra pixels' worth of padding is added to each row; 16 pixels is the alignment in this case.
I hadn't run into this before because 480 is divisible by 16.
You can call
CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
to get this number.
Oddly, though, it returns 0 the first time, 1 the second time, and so on, until it reaches the actual padding value (8). Then on the ninth image it returns 0 again.
According to rpappalax on this page http://gstreamer-devel.966125.n4.nabble.com/iOS-capture-problem-td4656685.html
The stride is effectively CVPixelBufferGetBytesPerRowOfPlane() and
includes padding (if any). When no padding is present
CVPixelBufferGetBytesPerRowOfPlane() will be equal to
CVPixelBufferGetWidth(), otherwise it'll be greater.
That hasn't been my experience, though.