No matching function for call to 'dispatch_async' in my drawing code
I'm trying to wrap the drawing code I posted below in a dispatch_async. I get an error: "No matching function for call to 'dispatch_async'". Since this is a memory-heavy operation, what I'm trying to do is create a queue so the rendering happens in the background, and then hand the image back to the main queue once it's ready, because UI updates happen on the main thread. Please point me in the right direction; I've posted the whole code.
#pragma mark Blurring the image
- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
    // Set up output context.
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

        // Apply Affine-Clamp filter to stretch the image so that it does not
        // look shrunken when gaussian blur is applied
        CGAffineTransform transform = CGAffineTransformIdentity;
        CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
        [clampFilter setValue:inputImage forKey:@"inputImage"];
        [clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];

        // Apply gaussian blur filter with radius of 30
        CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
        [gaussianBlurFilter setValue:@10 forKey:@"inputRadius"]; //30

        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];

        UIGraphicsBeginImageContext(self.view.frame.size);
        CGContextRef outputContext = UIGraphicsGetCurrentContext();

        // Invert image coordinates
        CGContextScaleCTM(outputContext, 1.0, -1.0);
        CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);

        // Draw base image.
        CGContextDrawImage(outputContext, self.view.frame, cgImage);

        // Apply white tint
        CGContextSaveGState(outputContext);
        CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
        CGContextFillRect(outputContext, self.view.frame);
        CGContextRestoreGState(outputContext);

        dispatch_async(dispatch_get_main_queue(), ^{
            UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return outputImage;
        })
    });
    // Output image is ready.
}
It throws the error on the dispatch_async(dispatch_get_main_queue(), ...) call, i.e. where I try to bring the result back to the main thread, since the UI works on the main thread. What am I missing?
Your code itself looks fine to me; it's probably the way you're using it that's wrong. So please try it as follows.
Create a method like this:
- (UIImage *)blurWithCoreImage:(UIImage *)sourceImage
{
    // Set up output context.
    // dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    // dispatch_async(queue, ^{
    CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

    // Apply Affine-Clamp filter to stretch the image so that it does not
    // look shrunken when gaussian blur is applied
    CGAffineTransform transform = CGAffineTransformIdentity;
    CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    [clampFilter setValue:inputImage forKey:@"inputImage"];
    [clampFilter setValue:[NSValue valueWithBytes:&transform objCType:@encode(CGAffineTransform)] forKey:@"inputTransform"];

    // Apply gaussian blur filter with radius of 30
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [gaussianBlurFilter setValue:clampFilter.outputImage forKey:@"inputImage"];
    [gaussianBlurFilter setValue:@10 forKey:@"inputRadius"]; //30

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:gaussianBlurFilter.outputImage fromRect:[inputImage extent]];

    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef outputContext = UIGraphicsGetCurrentContext();

    // Invert image coordinates
    CGContextScaleCTM(outputContext, 1.0, -1.0);
    CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);

    // Draw base image.
    CGContextDrawImage(outputContext, self.view.frame, cgImage);

    // Apply white tint
    CGContextSaveGState(outputContext);
    CGContextSetFillColorWithColor(outputContext, [UIColor colorWithWhite:1 alpha:0.2].CGColor);
    CGContextFillRect(outputContext, self.view.frame);
    CGContextRestoreGState(outputContext);

    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
Then call that method like this:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_async(queue, ^{
    UIImage *img = [self blurWithCoreImage:[UIImage imageNamed:@"imagename.png"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.view addSubview:[[UIImageView alloc] initWithImage:img]];
    });
});
I just tested it exactly like this and it gave me the correct result. Give it a try.
Let me know if you face any issues. All the best!
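As an aside, the "no matching function" error in the question's code most likely comes from the `return outputImage;` statement inside the inner block: the compiler infers a non-void return type for that block, so it no longer matches the `void (^)(void)` block (`dispatch_block_t`) that `dispatch_async` expects. (That inner `dispatch_async(...)` call is also missing its trailing semicolon.) A minimal sketch of the mismatch:

```objective-c
// dispatch_async expects a dispatch_block_t, i.e. a void (^)(void) block.
dispatch_async(dispatch_get_main_queue(), ^{
    // OK: this block returns nothing, so its type is void (^)(void).
});

// Does NOT compile: the return statement makes the compiler infer a
// UIImage *(^)(void) block, which does not match dispatch_block_t.
// dispatch_async(dispatch_get_main_queue(), ^{
//     return outputImage;
// });
```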
See the answer to a similar question:
Is this Core Graphics code thread safe?
You start drawing on one thread and then finish it on another. That's a ticking time bomb.
Also, the "return outputImage" executed on the main thread does you no good, because nobody receives the return value. You should do all of the drawing on the same thread, extract the image, and then invoke something on the main thread that deals with the complete image.
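The advice above (finish all the drawing on one thread, then hand the complete image to the main thread) can be sketched with a completion block. This is a hypothetical rewrite, not code from either answer; the method name, the `completion` parameter, and the simplified single-filter pipeline are illustrative:

```objective-c
// Hypothetical completion-block variant: all rendering stays on one
// background thread, and only the finished UIImage crosses to the main queue.
- (void)blurWithCoreImage:(UIImage *)sourceImage
               completion:(void (^)(UIImage *outputImage))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Do ALL of the drawing on this one background thread...
        CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];
        CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blurFilter setValue:inputImage forKey:@"inputImage"];
        [blurFilter setValue:@10 forKey:@"inputRadius"];

        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:blurFilter.outputImage
                                           fromRect:[inputImage extent]];
        UIImage *result = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);

        // ...and only hop to the main queue to hand over the finished image.
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(result);
        });
    });
}
```

A caller on the main thread would then do something like `[self blurWithCoreImage:image completion:^(UIImage *outputImage) { self.imageView.image = outputImage; }];` — no value is returned across threads, so the "nobody receives the return value" problem disappears.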