Memory leak when making CGImage from MTLTexture (Swift, macOS)
I have a Metal app and I'm trying to export frames to a QuickTime movie. I'm rendering the frames at very high resolution and then scaling them down before writing, in order to antialias the scene.
To scale it down, I take the high-resolution texture and convert it to a CGImage, then I resize the image and write out the smaller version. I found this extension online for converting an MTLTexture to a CGImage:
extension MTLTexture {

    func bytes() -> UnsafeMutableRawPointer {
        let width = self.width
        let height = self.height
        let rowBytes = self.width * 4
        let p = malloc(width * height * 4)
        self.getBytes(p!, bytesPerRow: rowBytes, from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
        return p!
    }

    func toImage() -> CGImage? {
        let p = bytes()

        let pColorSpace = CGColorSpaceCreateDeviceRGB()

        let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
        let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)

        let size = self.width * self.height * 4
        let rowBytes = self.width * 4

        let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
            // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
            // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
            return
        }

        if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {
            let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
            p.deallocate() // this fixes the memory leak
            return cgImageRef
        }
        p.deallocate() // this fixes the memory leak
        return nil
    }

} // end extension
I'm not positive, but it seems like something in this function is causing the memory leak -- every frame it holds onto the amount of memory in the huge texture/CGImage and never releases it.
The CGDataProvider initializer takes a 'releaseData' callback argument, but I was under the impression that it was no longer needed.
I also have a resizing extension on CGImage -- this might cause a leak too, I don't know. However, I can comment out the resizing and writing of the frame and the memory leak still builds up, so it seems to me that the conversion to CGImage is the main problem.
extension CGImage {

    func resize(_ scale: Float) -> CGImage? {
        let imageWidth = Float(width)
        let imageHeight = Float(height)

        let w = Int(imageWidth * scale)
        let h = Int(imageHeight * scale)

        guard let colorSpace = colorSpace else { return nil }
        guard let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bitsPerComponent, bytesPerRow: Int(Float(bytesPerRow) * scale), space: colorSpace, bitmapInfo: alphaInfo.rawValue) else { return nil }

        // draw image to context (resizing it)
        context.interpolationQuality = .high
        let r = CGRect(x: 0, y: 0, width: w, height: h)
        context.clear(r)
        context.draw(self, in: r)

        // extract resulting image from context
        return context.makeImage()
    }
}
Finally, here's the big function that I call every frame when exporting. I'm sorry for the length, but giving too much information is probably better than giving too little. Basically, at the start of rendering I allocate a huge MTLTexture ('exportTextureBig'), the size of my normal screen multiplied by 'zoom_subdivisions' in each direction. I render the scene in chunks, one for each spot on the grid, and assemble the big frame by using blitCommandEncoder.copy() to copy each small chunk onto the big texture. Once the entire frame is filled in, I try to make a CGImage from it, shrink it to another CGImage, and write that out.
I call commandBuffer.waitUntilCompleted() every frame while exporting -- hoping to avoid having the renderer hold onto textures it's still using.
func exportFrame2(_ commandBuffer: MTLCommandBuffer, _ texture: MTLTexture) { // texture is the offscreen render target for the screen-size chunks

    if zoom_index < zoom_subdivisions * zoom_subdivisions { // copy screen-size chunk to large texture
        if let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder() {
            let dx = Int(BigRender.globals_L.displaySize.x) * (zoom_index % zoom_subdivisions)
            let dy = Int(BigRender.globals_L.displaySize.y) * (zoom_index / zoom_subdivisions)
            blitCommandEncoder.copy(from: texture,
                                    sourceSlice: 0,
                                    sourceLevel: 0,
                                    sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                                    sourceSize: MTLSize(width: Int(BigRender.globals_L.displaySize.x), height: Int(BigRender.globals_L.displaySize.y), depth: 1),
                                    to: BigVideoWriter!.exportTextureBig!,
                                    destinationSlice: 0,
                                    destinationLevel: 0,
                                    destinationOrigin: MTLOrigin(x: dx, y: dy, z: 0))
            blitCommandEncoder.synchronize(resource: BigVideoWriter!.exportTextureBig!)
            blitCommandEncoder.endEncoding()
        }
    }

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted() // do this instead

    // is big frame complete?
    if zoom_index == zoom_subdivisions * zoom_subdivisions - 1 {
        // shrink the big texture here
        if let cgImage = self.exportTextureBig!.toImage() { // memory leak here?
            // this can be commented out and memory leak still happens
            if let smallImage = cgImage.resize(1.0 / Float(zoom_subdivisions)) {
                writeFrame(nil, smallImage)
            }
        }
    }
}
This all works, except for the huge memory leak. Is there something I can do to make it release the cgImage data every frame? Why is it holding onto it?
Thanks very much for any suggestions!
I think you're misunderstanding the issue with CGDataProviderReleaseDataCallback and the unavailability of CGDataProviderRelease().
CGDataProviderRelease() is (in C) used to release the CGDataProvider object itself. But that's not the same thing as the byte buffer you've given to the CGDataProvider when you created it.
In Swift, the lifetime of the CGDataProvider object is managed for you, but that doesn't help deallocate the byte buffer.
Ideally, CGDataProvider would be able to manage the lifetime of the byte buffer automatically, but it can't. CGDataProvider doesn't know how to release that byte buffer, because it doesn't know how it was allocated. That's why you have to supply a callback it can use to release the buffer. You are essentially providing the knowledge of how to release the byte buffer.
Since you're allocating the byte buffer with malloc(), your callback needs to free() it.
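Following that advice, a minimal sketch of such a callback might look like this (the callback name is illustrative; the provider-creation call is the one already in the question's toImage()):

```swift
import CoreGraphics

// Sketch: a release callback that frees a malloc()'d pixel buffer.
// With this in place, no manual p.deallocate() is needed -- Core Graphics
// invokes the callback once the provider is finished with the bytes.
let releaseMallocedPixelData: CGDataProviderReleaseDataCallback = { _, data, _ in
    // 'data' is the same pointer passed to CGDataProvider(dataInfo:data:size:releaseData:)
    free(UnsafeMutableRawPointer(mutating: data))
}

// Usage, replacing the empty callback in toImage():
// let provider = CGDataProvider(dataInfo: nil, data: p, size: size,
//                               releaseData: releaseMallocedPixelData)
```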
That said, you'd be better off using CFMutableData rather than UnsafeMutableRawPointer. Then, create the data provider using CGDataProvider(data:). In that case, all of the memory is managed for you.
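As a rough sketch of that suggestion (untested, and assuming the same BGRA8 pixel layout as the question's code): copy the texture's bytes into a Swift byte array, wrap them in CFData, and let CGDataProvider(data:) own the buffer:

```swift
import Metal
import CoreGraphics

extension MTLTexture {
    // Sketch: no malloc() and no release callback -- CFData owns a copy of the pixels.
    func toImageManaged() -> CGImage? {
        let rowBytes = width * 4
        let byteCount = rowBytes * height

        var pixels = [UInt8](repeating: 0, count: byteCount)
        pixels.withUnsafeMutableBytes { buffer in
            getBytes(buffer.baseAddress!, bytesPerRow: rowBytes,
                     from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
        }

        // CFDataCreate copies the bytes; their lifetime is now managed for you.
        guard let data = CFDataCreate(nil, pixels, byteCount),
              let provider = CGDataProvider(data: data) else { return nil }

        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue
                                             | CGBitmapInfo.byteOrder32Little.rawValue)
        return CGImage(width: width, height: height,
                       bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes,
                       space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: bitmapInfo,
                       provider: provider, decode: nil,
                       shouldInterpolate: true, intent: .defaultIntent)
    }
}
```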
I use very similar code, and my problem was solved once I added the code that deallocates p:
func toImage() -> CGImage? {
    let p = bytes()

    let pColorSpace = CGColorSpaceCreateDeviceRGB()

    let rawBitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue // noneSkipFirst
    let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)

    let size = self.width * self.height * 4
    let rowBytes = self.width * 4

    let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
        // https://developer.apple.com/reference/coregraphics/cgdataproviderreleasedatacallback
        // N.B. 'CGDataProviderRelease' is unavailable: Core Foundation objects are automatically memory managed
        return
    }

    if let provider = CGDataProvider(dataInfo: nil, data: p, size: size, releaseData: releaseMaskImagePixelData) {
        let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
        p.deallocate() // this fixes the memory leak
        return cgImageRef
    }
    p.deallocate() // this fixes the memory leak, but the data provider is no longer usable (you just deallocated its backing store)
    return nil
}
Anywhere you need to use the CGImage briefly, wrap the work in an autoreleasepool:
autoreleasepool {
    let lastDrawableDisplayed = self.metalView?.currentDrawable?.texture
    let cgImage = lastDrawableDisplayed?.toImage() // your code to convert drawable to CGImage
    // do work with cgImage
}