Filter image from lookup image with Core Image
I'm trying to develop some color filters for the images in my app. We use lookup images, because they make it easy to replicate filters from other programs and conveniently produce the same results across platforms. Our lookups look like this:

Previously we used GPUImage to apply the lookup images, but I would like to avoid that dependency, since it weighs in at 5.4 MB and we only need this one feature.

After searching for a few hours, I can't seem to find any resources on how to filter an image with a lookup image using Core Image. Looking through the documentation, however, CIColorMatrix looks like the right tool. The problem is that I'm too dumb to understand how it works. Which brings me to my question:

Does anyone have an example of how to filter an image from a lookup image using CIColorMatrix? (Or any pointers on how I should go about figuring it out myself?)

I've dug through the GPUImage source, and the shaders they use to apply a lookup image appear to be defined as follows:

Lookup image fragment shader:
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2; // lookup texture
uniform float intensity;

void main() {
    vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

    float blueColor = textureColor.b * 63.0;

    vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
    vec4 newColor2 = texture2D(inputImageTexture2, texPos2);

    vec4 newColor = mix(newColor1, newColor2, fract(blueColor));

    gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), intensity);
}
Along with this vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
attribute vec4 inputTextureCoordinate2;

varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;

void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    textureCoordinate2 = inputTextureCoordinate2.xy;
}
Can/should I create my own filter using these shaders instead?
All credit for this answer goes to Nghia Tran. If you ever see this, thank you!

As it turns out, there has been an answer out there all along. Nghia Tran wrote an article here that solves my exact use case.

He generously provides an extension that generates a CIFilter from a lookup image. I'll paste it below to preserve this answer for future developers.

If you're using Swift, you will need to import CIFilter+LUT.h in your bridging header.
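For reference, the bridging header only needs this single line (assuming the two files below have been added to your target):

#import "CIFilter+LUT.h"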
Here is a snippet demonstrating how to use it on the GPU in Swift 4. This is far from optimized (the context and so on should be cached), but it is a good starting point.
static func applyFilter(with lookupImage: UIImage, to image: UIImage) -> UIImage? {
    guard let cgInputImage = image.cgImage else {
        return nil
    }
    guard let glContext = EAGLContext(api: .openGLES2) else {
        return nil
    }
    let ciContext = CIContext(eaglContext: glContext)
    guard let lookupFilter = CIFilter(lookupImage: lookupImage, dimension: 64) else {
        return nil
    }
    lookupFilter.setValue(CIImage(cgImage: cgInputImage),
                          forKey: "inputImage")
    guard let output = lookupFilter.outputImage else {
        return nil
    }
    guard let cgOutputImage = ciContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgOutputImage)
}
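To call it, do something like the following sketch. Here "lut" and "photo" are hypothetical asset names, MyFilters stands in for whichever type declares applyFilter(with:to:), and imageView is assumed to exist:

if let lookup = UIImage(named: "lut"),
   let input = UIImage(named: "photo"),
   let filtered = MyFilters.applyFilter(with: lookup, to: input) {
    imageView.image = filtered
}

In production code you would create the EAGLContext and CIContext once and reuse them across calls; both are expensive to set up.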
CIFilter+LUT.h
#import <CoreImage/CoreImage.h>
@import UIKit.UIImage;
@class CIFilter;
@interface CIFilter (LUT)
+ (CIFilter *)filterWithLookupImage:(UIImage *)image dimension:(NSInteger)n;
@end
CIFilter+LUT.m
#import "CIFilter+LUT.h"
#import <CoreImage/CoreImage.h>
#import <OpenGLES/EAGL.h>
@implementation CIFilter (LUT)
+(CIFilter *)filterWithLookupImage:(UIImage *)image dimension:(NSInteger)n {
NSInteger width = CGImageGetWidth(image.CGImage);
NSInteger height = CGImageGetHeight(image.CGImage);
NSInteger rowNum = height / n;
NSInteger columnNum = width / n;
if ((width % n != 0) || (height % n != 0) || (rowNum * columnNum != n)) {
NSLog(@"Invalid colorLUT");
return nil;
}
unsigned char *bitmap = [self createRGBABitmapFromImage:image.CGImage];
if (bitmap == NULL) {
return nil;
}
NSInteger size = n * n * n * sizeof(float) * 4;
float *data = malloc(size);
int bitmapOffest = 0;
int z = 0;
for (int row = 0; row < rowNum; row++) {
for (int y = 0; y < n; y++) {
int tmp = z;
for (int col = 0; col < columnNum; col++) {
for (int x = 0; x < n; x++) {
float r = (unsigned int)bitmap[bitmapOffest];
float g = (unsigned int)bitmap[bitmapOffest + 1];
float b = (unsigned int)bitmap[bitmapOffest + 2];
float a = (unsigned int)bitmap[bitmapOffest + 3];
NSInteger dataOffset = (z*n*n + y*n + x) * 4;
data[dataOffset] = r / 255.0;
data[dataOffset + 1] = g / 255.0;
data[dataOffset + 2] = b / 255.0;
data[dataOffset + 3] = a / 255.0;
bitmapOffest += 4;
}
z++;
}
z = tmp;
}
z += columnNum;
}
free(bitmap);
CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
[filter setValue:[NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES] forKey:@"inputCubeData"];
[filter setValue:[NSNumber numberWithInteger:n] forKey:@"inputCubeDimension"];
return filter;
}
+ (unsigned char *)createRGBABitmapFromImage:(CGImageRef)image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    unsigned char *bitmap;
    NSInteger bitmapSize;
    NSInteger bytesPerRow;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    bytesPerRow = (width * 4);
    bitmapSize = (bytesPerRow * height);

    bitmap = malloc(bitmapSize);
    if (bitmap == NULL) {
        return NULL;
    }

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        free(bitmap);
        return NULL;
    }

    // Draw the image into a plain 8-bit RGBA bitmap so the pixel data can
    // be read back directly.
    context = CGBitmapContextCreate(bitmap, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmap);
        return NULL;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return bitmap;
}

@end
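The heavy lifting here is done by Apple's built-in CIColorCube filter; the category above merely unpacks the tiled lookup image into the cube layout that filter expects. If you want to sanity-check the pipeline without a real lookup image, a minimal Swift sketch like the following builds an identity cube that should leave any image unchanged (identityColorCubeFilter is a made-up helper, not from the original article; the default dimension of 64 matches the snippet above):

import CoreImage

func identityColorCubeFilter(dimension n: Int = 64) -> CIFilter? {
    var data = [Float]()
    data.reserveCapacity(n * n * n * 4)
    // CIColorCube entries are ordered red-fastest, then green, then blue.
    for b in 0..<n {
        for g in 0..<n {
            for r in 0..<n {
                data.append(Float(r) / Float(n - 1)) // map each color...
                data.append(Float(g) / Float(n - 1))
                data.append(Float(b) / Float(n - 1)) // ...to itself
                data.append(1.0) // opaque alpha
            }
        }
    }
    let cubeData = data.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(cubeData, forKey: "inputCubeData")
    filter?.setValue(n, forKey: "inputCubeDimension")
    return filter
}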