How to keep NSBitmapImageRep from creating lots of intermediate CGImages?
I have a generative-art app that starts with a small set of points, grows them outward, and checks each expansion to make sure it doesn't intersect anything. My first naive implementation did everything on the main UI thread and produced the expected results. As the drawing grows there are more points to check, so it slows down and eventually blocks the UI.
I did the obvious thing and moved the computation onto another thread so the UI could stay responsive. That helped, but only a little. I did this by wrapping an NSBitmapImageRep in an NSGraphicsContext so that I could draw into it. But I needed to make sure I wasn't trying to draw it to the screen on the main UI thread while I was also drawing into it on the background thread, so I introduced a lock. As the data gets large, the drawing can also take a long time, so even that was problematic.
My latest version has 2 NSBitmapImageReps. One holds the most recently drawn version and gets blitted to the screen whenever the view needs updating; the other is drawn into on the background thread. When the drawing on the background thread is done, it gets copied into the other one. I do the copy by getting each rep's base address and simply calling memcpy() to actually move the pixels from one to the other. (I tried swapping the reps instead of copying, but even with the drawing ending in a call to -[NSGraphicsContext flushContext], I would get partially drawn results blitted to the window.)
The computation thread looks like this:
BOOL done = NO;
while (!done)
{
    self->model->lockBranches();
    self->model->iterate();
    done = (!self->model->moreToDivide()) || (!self->keepIterating);
    self->model->unlockBranches();

    [self drawIntoOffscreen];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.needsDisplay = YES;
    });
}
This is enough to keep the UI responsive. However, every time I copy the drawn image into the blitting image, I call -[NSBitmapImageRep baseAddress]. Looking at the memory profile in Instruments, every call to that function causes a CGImage to be created. Moreover, the CGImages are not released until the computation finishes, which can take several minutes, so memory gets really big. I'm seeing about 3–4 GB of CGImages in my process, even though I never need more than 2 of them. After the computation finishes and the cache is emptied, my app's memory drops to only 350–500 MB. It hadn't occurred to me to use an autorelease pool in the computation loop for this, but I'll give it a try.
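A sketch of the autorelease-pool change I plan to try, reusing the loop above (this assumes nothing in drawIntoOffscreen needs autoreleased objects to outlive the iteration):

```objc
BOOL done = NO;
while (!done)
{
    @autoreleasepool {
        self->model->lockBranches();
        self->model->iterate();
        done = (!self->model->moreToDivide()) || (!self->keepIterating);
        self->model->unlockBranches();

        [self drawIntoOffscreen];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.needsDisplay = YES;
        });
    }   // autoreleased objects from this iteration are released here,
        // each pass through the loop, instead of piling up for minutes
}
```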
The OS appears to be caching the images it creates, but it doesn't empty the cache until the computation is done, so the cache grows without bound until then. Is there any way to prevent this?
Don't copy the image using -bitmapData and memcpy(). Draw one image into the other.
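A minimal sketch of what that looks like, with hypothetical names for your two reps (offscreenRep for the one the background thread drew, blitRep for the one you blit to the screen), assuming both are the same size:

```objc
// Copy by drawing rather than by touching pixel data. This stays on the
// CGImage fast path and avoids unpacking/repacking the backing data.
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:blitRep]];
[offscreenRep drawInRect:NSMakeRect(0, 0,
                                    offscreenRep.size.width,
                                    offscreenRep.size.height)];
[NSGraphicsContext restoreGraphicsState];
```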
I frequently recommend that developers read the "NSBitmapImageRep: CoreGraphics impedance matching and performance notes" section of the 10.6 AppKit release notes:
NSBitmapImageRep: CoreGraphics impedance matching and performance notes
Release notes above detail core changes at the NSImage level for
SnowLeopard. There are also substantial changes at the
NSBitmapImageRep level, also for performance and to improve impedance
matching with CoreGraphics.
NSImage is a fairly abstract representation of an image. It's pretty
much just a thing-that-can-draw, though it's less abstract than NSView
in that it should not behave differently based on aspects of the context
it's drawn into except for quality decisions. That's kind of an opaque
statement, but it can be illustrated with an example: If you draw a
button into a 100x22 region vs a 22x22 region, you can expect the
button to stretch its middle but not its end caps. An image should not
behave that way (and if you try it, you'll probably break!). An image
should always linearly and uniformly scale to fill the rect in which
it's drawn, though it may choose representations and such to optimize
quality for that region. Similarly, all the image representations in
an NSImage should represent the same drawing. Don't pack some totally
different image in as a rep.
That digression past us, an NSBitmapImageRep is a much more concrete
object. An NSImage does not have pixels, an NSBitmapImageRep does. An
NSBitmapImageRep is a chunk of data together with pixel format
information and colorspace information that allows us to interpret the
data as a rectangular array of color values.
That's the same, pretty much, as a CGImage. In SnowLeopard an
NSBitmapImageRep is natively backed by a CGImageRef, as opposed to
directly a chunk of data. The CGImageRef really has the chunk of data.
While in Leopard an NSBitmapImageRep instantiated from a CGImage would
unpack and possibly process the data (which happens when reading from
a bitmap file format), in SnowLeopard we try hard to just hang onto
the original CGImage.
This has some performance consequences. Most are good! You should see
less encoding and decoding of bitmap data as CGImages. If you
initialize a NSImage from a JPEG file, then draw it in a PDF, you
should get a PDF of the same file size as the original JPEG. In
Leopard you'd see a PDF the size of the decompressed image. To take
another example, CoreGraphics caches, including uploads to the
graphics card, are tied to CGImage instances, so the more the same
instance can be used the better.
However: To some extent, the operations that are fast with
NSBitmapImageRep have changed. CGImages are not mutable,
NSBitmapImageRep is. If you modify an NSBitmapImageRep, internally it
will likely have to copy the data out of a CGImage, incorporate your
changes, and repack it as a new CGImage. So, basically, drawing
NSBitmapImageRep is fast, looking at or modifying its pixel data is
not. This was true in Leopard, but it's more true now.
The above steps do happen lazily: If you do something that causes
NSBitmapImageRep to copy data out of its backing CGImageRef (like call
bitmapData), the bitmap will not repack the data as a CGImageRef until
it is drawn or until it needs a CGImage for some other reason. So,
certainly accessing the data is not the end of the world, and is the
right thing to do in some circumstances, but in general you should be
thinking about drawing instead. If you think you want to work with
pixels, take a look at CoreImage instead - that's the API in our
system that is truly intended for pixel processing.
This coincides with safety. A problem we've seen with our SnowLeopard
changes is that apps are rather fond of hardcoding bitmap formats. An
NSBitmapImageRep could be 8, 32, or 128 bits per pixel, it could be
floating point or not, it could be premultiplied or not, it might or
might not have an alpha channel, etc. These aspects are specified with
bitmap properties, like -bitmapFormat. Unfortunately, if someone wants
to extract the bitmapData from an NSBitmapImageRep instance, they
typically just call bitmapData, treat the data as (say) premultiplied
32 bit per pixel RGBA, and if it seems to work, call it a day.
Now that NSBitmapImageRep is not processing data as much as it used
to, random bitmap image reps you may get ahold of may have different
formats than they used to. Some of those hardcoded formats might be
wrong.
The solution is not to try to handle the complete range of formats
that NSBitmapImageRep's data might be in, that's way too hard.
Instead, draw the bitmap into something whose format you know, then
look at that.
That looks like this:
NSBitmapImageRep *bitmapIGotFromAPIThatDidNotSpecifyFormat;
NSBitmapImageRep *bitmapWhoseFormatIKnow = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:width pixelsHigh:height
bitsPerSample:bps samplesPerPixel:spp hasAlpha:alpha isPlanar:isPlanar
colorSpaceName:colorSpaceName bitmapFormat:bitmapFormat bytesPerRow:rowBytes
bitsPerPixel:pixelBits];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapWhoseFormatIKnow]];
[bitmapIGotFromAPIThatDidNotSpecifyFormat draw];
[NSGraphicsContext restoreGraphicsState];
unsigned char *bitmapDataIUnderstand = [bitmapWhoseFormatIKnow bitmapData];
This produces no more copies of the data than just accessing
bitmapData of bitmapIGotFromAPIThatDidNotSpecifyFormat, since that
data would need to be copied out of a backing CGImage anyway. Also
note that this doesn't depend on the source drawing being a bitmap.
This is a way to get pixels in a known format for any drawing, or just
to get a bitmap. This is a much better way to get a bitmap than
calling -TIFFRepresentation, for example. It's also better than
locking focus on an NSImage and using -[NSBitmapImageRep
initWithFocusedViewRect:].
So, to sum up: (1) Drawing is fast. Playing with pixels is not. (2) If
you think you need to play with pixels, (a) consider if there's a way
to do it with drawing or (b) look into CoreImage. (3) If you still
want to get at the pixels, draw into a bitmap whose format you know
and look at those pixels.
In fact, it's best to start with the earlier, similarly titled section, "NSImage, CGImage, and CoreGraphics impedance matching", and read through to the later one.
Incidentally, swapping the image reps is very likely to work; you're just not synchronizing them correctly. You'd have to show the code that uses both reps for us to say for sure.
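For what it's worth, a common shape for that synchronization looks like the following. This is a sketch, not your code: the frontBuffer/backBuffer properties and the bufferLock (an NSLock) are assumed names.

```objc
// Double buffering: the background thread draws into backBuffer, then
// swaps the two reps under a lock. -drawRect: reads frontBuffer under
// the same lock, so it can never blit a half-drawn image.
- (void)publishOffscreen
{
    [self.bufferLock lock];
    NSBitmapImageRep *tmp = self.frontBuffer;
    self.frontBuffer = self.backBuffer;
    self.backBuffer = tmp;
    [self.bufferLock unlock];

    dispatch_async(dispatch_get_main_queue(), ^{
        self.needsDisplay = YES;
    });
}

- (void)drawRect:(NSRect)dirtyRect
{
    [self.bufferLock lock];
    [self.frontBuffer drawInRect:self.bounds];
    [self.bufferLock unlock];
}
```

The key point is that the swap and the read are both inside the critical section; publishing the swapped rep with needsDisplay alone is not enough.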