How do I handle GPUImage image buffers so that they're usable with things like Tokbox?
I'm using OpenTok and replacing its Publisher with my own subclassed version that incorporates GPUImage. My goal is to add filters.

The app builds and runs, but crashes here:
func willOutputSampleBuffer(sampleBuffer: CMSampleBuffer!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer!, 0)
    videoFrame?.clearPlanes()
    for var i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer!); i++ {
        print(i)
        videoFrame?.planes.addPointer(CVPixelBufferGetBaseAddressOfPlane(imageBuffer!, i))
    }
    videoFrame?.orientation = OTVideoOrientation.Left
    videoCaptureConsumer.consumeFrame(videoFrame) // comment this out to stop the app from crashing. Otherwise, it crashes here.
    CVPixelBufferUnlockBaseAddress(imageBuffer!, 0)
}
If I comment out that line, I can run the app without a crash. In fact, I can see the filter being applied correctly, but the preview flickers. And nothing gets published to OpenTok.

My entire code base is available for download. This is the specific file for the class. It's actually easy to run - just install the pods before running.

Upon inspection, it may be that videoCaptureConsumer is never initialized. Protocol reference

I don't really understand what my code means. I translated it directly from this Objective-C file: Tokbox's sample project
I analyzed both your Swift project and the Objective-C project, and found that neither of them works. With this post I'd like to give a first update and show a demo that actually works, combining GPUImage filters with OpenTok.
Reasons why your GPUImage filter implementation doesn't work with OpenTok

#1 Multiple target specification
let sepia = GPUImageSepiaFilter()
videoCamera?.addTarget(sepia)
sepia.addTarget(self.view)
videoCamera?.addTarget(self.view) // <-- This is wrong and produces the flickering
videoCamera?.startCameraCapture()
Two sources are trying to render into the same view, which makes things flicker...
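The fix is to let the filter be the view's only source. A minimal corrected sketch of the same setup (assuming the same `videoCamera` and `view` properties as in your class):

```swift
let sepia = GPUImageSepiaFilter()
videoCamera?.addTarget(sepia)
sepia.addTarget(self.view)       // the filter is now the only source rendering into the view
videoCamera?.startCameraCapture()
```

The only change is dropping the second `videoCamera?.addTarget(self.view)` call, so the camera no longer competes with the filter for the same render target.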
The first part is solved. Next: why is nothing being published to OpenTok? To find the cause, I decided to start from the "working" Objective-C version.
#2 The original Objective-C code base

The original Objective-C version doesn't have the expected functionality. Publishing from the GPUImageVideoCamera to the OpenTok subscriber works, but no filtering is involved - and filtering is your core requirement.

The point is that adding a filter is not as simple as one might think, because of the different image formats and the different asynchronous programming mechanisms involved.

So, reason #2 why your code doesn't work as expected: the reference code base you ported your work from is incorrect. It doesn't allow a GPU filter to be placed into the publisher-subscriber pipeline.
A working Objective-C implementation

I modified the Objective-C version. The current result looks like this:

[![enter image description here][1]][1]

It runs smoothly.
Final steps

Here is the complete code for the custom TokBox publisher. It is basically the original code (TokBoxGPUImagePublisher) from [https://github.com/JayTokBox/TokBoxGPUImage/blob/master/TokBoxGPUImage/ViewController.m][2], with the following notable modifications:

OTVideoFrame is instantiated with a new format
...
format = [[OTVideoFormat alloc] init];
format.pixelFormat = OTPixelFormatARGB;
format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
format.imageWidth = imageWidth;
format.imageHeight = imageHeight;
videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
...
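A side note on the numbers in this format: with OTPixelFormatARGB every pixel occupies 4 bytes, which is where the `imageWidth * 4` for `bytesPerRow` comes from. A quick standalone sketch of that arithmetic (plain Swift, no OpenTok types involved):

```swift
// Row and frame sizes for a 32-bit ARGB video frame.
let imageWidth = 640
let imageHeight = 480

let bytesPerPixel = 4                         // one byte each for A, R, G, B
let bytesPerRow = imageWidth * bytesPerPixel  // matches the bytesPerRow entry above
let frameSize = bytesPerRow * imageHeight     // total bytes per ARGB frame

print(bytesPerRow)  // 2560
print(frameSize)    // 1228800
```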
Replacing the willOutputSampleBuffer callback mechanism

This callback only fires when a sample buffer coming straight from the GPUImageVideoCamera is ready - not for buffers coming out of your custom filters. GPUImageFilters don't offer such a callback/delegate mechanism. That's why we put a GPUImageRawDataOutput in between and ask it for the ready-made images. This pipeline is implemented in the initCapture method, as follows:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                  cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaImageFilter];

// Create rawOut
CGSize size = CGSizeMake(imageWidth, imageHeight);
rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];

// Filter into rawOut
[sepiaImageFilter addTarget:rawOut];

// Handle filtered images
// We need a weak reference here to avoid a strong reference cycle.
__weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
__weak OTVideoFrame* weakVideoFrame = self->videoFrame;
__weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;

[rawOut setNewFrameAvailableBlock:^{
    [weakRawOut lockFramebufferForReading];
    // GLubyte is an uint8_t
    GLubyte* outputBytes = [weakRawOut rawBytesForImage];

    // About the video formats used by OTVideoFrame
    // --------------------------------------------
    // Both YUV video formats (I420, NV12) have the following (for us) important properties:
    //
    // - Two planes
    // - 8 bit Y plane
    // - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
    // --> 12 bits per pixel
    //
    // Further reading: www.fourcc.org/yuv.php

    [weakVideoFrame clearPlanes];
    [weakVideoFrame.planes addPointer:outputBytes];
    [weakVideoCaptureConsumer consumeFrame:weakVideoFrame];
    [weakRawOut unlockFramebufferAfterReading];
}];
[videoCamera addTarget:self.view];
[videoCamera startCameraCapture];
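The __weak references in this pipeline are essential: rawOut retains its newFrameAvailableBlock, so capturing the publisher's state strongly would create a retain cycle and leak the publisher. The same mechanism can be shown in plain Swift (hypothetical Producer/Publisher types, only for illustration):

```swift
// Illustration of why the frame callback must capture its owner weakly.
var log: [String] = []

final class Producer {
    var onFrame: (() -> Void)?   // the producer retains this block
}

final class Publisher {
    let producer = Producer()
    init() {
        // [weak self] breaks the cycle: producer -> onFrame -> self -> producer
        producer.onFrame = { [weak self] in
            log.append(self == nil ? "publisher gone" : "frame consumed")
        }
    }
    deinit { log.append("publisher deallocated") }
}

var publisher: Publisher? = Publisher()
let producer = publisher!.producer
producer.onFrame?()   // publisher still alive
publisher = nil       // deinit runs; with a strong capture it never would
producer.onFrame?()   // the block survives, but sees that its owner is gone
print(log)            // ["frame consumed", "publisher deallocated", "publisher gone"]
```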
The complete code (the really important part is initCapture)
//
//  TokBoxGPUImagePublisher.m
//  TokBoxGPUImage
//
//  Created by Jaideep Shah on 9/5/14.
//  Copyright (c) 2014 Jaideep Shah. All rights reserved.
//

#import "TokBoxGPUImagePublisher.h"
#import "GPUImage.h"

static size_t imageHeight = 480;
static size_t imageWidth = 640;

@interface TokBoxGPUImagePublisher() <GPUImageVideoCameraDelegate, OTVideoCapture> {
    GPUImageVideoCamera *videoCamera;
    GPUImageSepiaFilter *sepiaImageFilter;
    OTVideoFrame* videoFrame;
    GPUImageRawDataOutput* rawOut;
    OTVideoFormat* format;
}
@end

@implementation TokBoxGPUImagePublisher

@synthesize videoCaptureConsumer; // In OTVideoCapture protocol

- (id)initWithDelegate:(id<OTPublisherDelegate>)delegate name:(NSString*)name
{
    self = [super initWithDelegate:delegate name:name];
    if (self)
    {
        self.view = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
        [self setVideoCapture:self];
        format = [[OTVideoFormat alloc] init];
        format.pixelFormat = OTPixelFormatARGB;
        format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
        format.imageWidth = imageWidth;
        format.imageHeight = imageHeight;
        videoFrame = [[OTVideoFrame alloc] initWithFormat:format];
    }
    return self;
}

#pragma mark GPUImageVideoCameraDelegate

- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    [videoFrame clearPlanes];
    for (int i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer); i++) {
        [videoFrame.planes addPointer:CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i)];
    }
    videoFrame.orientation = OTVideoOrientationLeft;
    [self.videoCaptureConsumer consumeFrame:videoFrame];
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}

#pragma mark OTVideoCapture

- (void)initCapture {
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                      cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
    [videoCamera addTarget:sepiaImageFilter];

    // Create rawOut
    CGSize size = CGSizeMake(imageWidth, imageHeight);
    rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];

    // Filter into rawOut
    [sepiaImageFilter addTarget:rawOut];

    // Handle filtered images
    // We need a weak reference here to avoid a strong reference cycle.
    __weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
    __weak OTVideoFrame* weakVideoFrame = self->videoFrame;
    __weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;

    [rawOut setNewFrameAvailableBlock:^{
        [weakRawOut lockFramebufferForReading];
        // GLubyte is an uint8_t
        GLubyte* outputBytes = [weakRawOut rawBytesForImage];

        // About the video formats used by OTVideoFrame
        // --------------------------------------------
        // Both YUV video formats (I420, NV12) have the following (for us) important properties:
        //
        // - Two planes
        // - 8 bit Y plane
        // - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
        // --> 12 bits per pixel
        //
        // Further reading: www.fourcc.org/yuv.php

        [weakVideoFrame clearPlanes];
        [weakVideoFrame.planes addPointer:outputBytes];
        [weakVideoCaptureConsumer consumeFrame:weakVideoFrame];
        [weakRawOut unlockFramebufferAfterReading];
    }];
    [videoCamera addTarget:self.view];
    [videoCamera startCameraCapture];
}

- (void)releaseCapture
{
    videoCamera.delegate = nil;
    videoCamera = nil;
}

- (int32_t)startCapture {
    return 0;
}

- (int32_t)stopCapture {
    return 0;
}

- (BOOL)isCaptureStarted {
    return YES;
}

- (int32_t)captureSettings:(OTVideoFormat*)videoFormat {
    videoFormat.pixelFormat = OTPixelFormatNV12;
    videoFormat.imageWidth = imageWidth;
    videoFormat.imageHeight = imageHeight;
    return 0;
}

@end