How to create video from ARKit face session depth pixel buffer
I am trying to append frame.capturedDepthData.depthDataMap to an AVAssetWriterInputPixelBufferAdaptor, but the append never succeeds.
My adaptor is configured like this:
NSError* error;
videoWriter = [AVAssetWriter.alloc initWithURL:outputURL fileType:AVFileTypeMPEG4 error:&error];
if (error)
{
    NSLog(@"Error creating video writer: %@", error);
    return;
}

NSDictionary* videoSettings = @{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: @640,
    AVVideoHeightKey: @360
};
writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
writerInput.transform = CGAffineTransformMakeRotation(M_PI_2);

NSDictionary* sourcePixelBufferAttributesDictionary = @{
    (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_DepthFloat32)
};
adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                               sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

if ([videoWriter canAddInput:writerInput])
{
    [videoWriter addInput:writerInput];
}
else
{
    NSLog(@"Error: cannot add writerInput to videoWriter.");
}

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
Then, in every session:(ARSession*)session didUpdateFrame:(ARFrame*)frame
callback, I try to append the depth pixel buffer like this:
if (!adaptor.assetWriterInput.readyForMoreMediaData)
{
    NSLog(@"Asset input writer is not ready for more media data!");
}
else
{
    if (frame.capturedDepthData.depthDataMap != NULL)
    {
        frameCount++;
        CVPixelBufferRef pixelRef = frame.capturedDepthData.depthDataMap;
        BOOL result = [adaptor appendPixelBuffer:pixelRef withPresentationTime:CMTimeMake(frameCount, 15)];
    }
}
But the result of appending the pixel buffer is always FALSE.
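For reference, a minimal way to narrow down why appendPixelBuffer: returns NO, assuming the videoWriter and adaptor configured above, is to inspect the writer's status and error, and whether the adaptor was able to create a pixel buffer pool for the requested format:

if (!result)
{
    // After a failed append the writer usually transitions to the failed
    // state and exposes the underlying error.
    if (videoWriter.status == AVAssetWriterStatusFailed)
    {
        NSLog(@"Writer failed: %@", videoWriter.error);
    }
    // A nil pool after startWriting typically means the source pixel buffer
    // attributes (here kCVPixelFormatType_DepthFloat32) are not compatible
    // with the H.264 output settings.
    if (adaptor.pixelBufferPool == NULL)
    {
        NSLog(@"Adaptor has no pixel buffer pool for the requested format.");
    }
}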
Now, if I try to append frame.capturedImage to a properly configured adaptor, that always succeeds, and this is how I currently produce a video file from the front camera.
But how can I produce a video from the depth pixel buffer?
Below is an example of how to convert the depthDataMap pixel buffer into a pixel buffer that can be appended to the adaptor successfully:
- (void)session:(ARSession*)session didUpdateFrame:(ARFrame*)frame
{
    CVPixelBufferRef depthDataMap = frame.capturedDepthData.depthDataMap;
    if (!depthDataMap)
    {
        // No depth data available for this frame.
        return;
    }
    CIImage* image = [CIImage imageWithCVPixelBuffer:depthDataMap];
    CVPixelBufferRef buffer = NULL;
    CVReturn err = PixelBufferCreateFromImage(image, &buffer);
    if (err != kCVReturnSuccess || buffer == NULL)
    {
        return;
    }
    frameDepthCount++;
    [adaptorDepth appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frameDepthCount, 15)]; // 15 is the frame rate
    CVPixelBufferRelease(buffer); // the converted buffer is owned by us, so release it after appending
}
CVReturn PixelBufferCreateFromImage(CIImage* ciImage, CVPixelBufferRef* outBuffer)
{
    CIContext* context = [CIContext context];
    NSDictionary* attributes = @{
        (NSString*) kCVPixelBufferCGBitmapContextCompatibilityKey: @YES,
        (NSString*) kCVPixelBufferCGImageCompatibilityKey: @YES
    };
    // Create a 32ARGB buffer of the same size as the depth image; the H.264
    // encoder accepts this format, unlike kCVPixelFormatType_DepthFloat32.
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault,
                                       (size_t) ciImage.extent.size.width,
                                       (size_t) ciImage.extent.size.height,
                                       kCVPixelFormatType_32ARGB,
                                       (__bridge CFDictionaryRef _Nullable) attributes,
                                       outBuffer);
    if (err)
    {
        return err;
    }
    if (*outBuffer)
    {
        // Render the depth CIImage into the newly created buffer.
        [context render:ciImage toCVPixelBuffer:*outBuffer];
    }
    return kCVReturnSuccess;
}
The key is the PixelBufferCreateFromImage method, which creates a valid pixel buffer from a CIImage that wraps the original depth pixel buffer.
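For completeness, the adaptorDepth used above is assumed to be configured like the colour adaptor from the question, except that its source pixel buffer attributes advertise the 32ARGB format produced by the conversion rather than kCVPixelFormatType_DepthFloat32. A minimal sketch of that assumption (the writerInputDepth name is only illustrative):

NSDictionary* depthVideoSettings = @{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: @640,
    AVVideoHeightKey: @360
};
// Hypothetical input/adaptor pair for the depth track; only the pixel format
// attribute differs from the colour configuration shown in the question.
AVAssetWriterInput* writerInputDepth =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:depthVideoSettings];
NSDictionary* depthSourceAttributes = @{
    (NSString*) kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB)
};
adaptorDepth = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInputDepth
                               sourcePixelBufferAttributes:depthSourceAttributes];

Because the converted buffer is created by PixelBufferCreateFromImage rather than pulled from the adaptor's pool, it is owned by the caller, which is why the delegate method above releases it with CVPixelBufferRelease after appending.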