OpenGL ES texture degrades in quality
I'm trying to draw a Core Graphics image generated (at screen resolution) into OpenGL. However, the render comes out more aliased than the CG output (anti-aliasing is disabled in CG). The text is the texture (the blue background is drawn in Core Graphics in the first image and in OpenGL in the second).
CG output:
OpenGL render (in the simulator):
Framebuffer setup:
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenRenderbuffers(1, &onscrRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];
glGenFramebuffers(1, &onscrFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, onscrRenderBuffer);
Texture loading code:
-(GLuint) loadTextureFromImage:(UIImage*)image {
    CGImageRef textureImage = image.CGImage;
    size_t width = CGImageGetWidth(textureImage);
    size_t height = CGImageGetHeight(textureImage);

    GLubyte* spriteData = (GLubyte*) malloc(width*height*4);

    CGColorSpaceRef cs = CGImageGetColorSpace(textureImage);
    CGContextRef c = CGBitmapContextCreate(spriteData, width, height, 8, width*4, cs, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(cs);

    CGContextScaleCTM(c, 1, -1);
    CGContextTranslateCTM(c, 0, -CGContextGetClipBoundingBox(c).size.height);
    CGContextDrawImage(c, (CGRect){CGPointZero, {width, height}}, textureImage);
    CGContextRelease(c);

    GLuint glTex;
    glGenTextures(1, &glTex);
    glBindTexture(GL_TEXTURE_2D, glTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    glBindTexture(GL_TEXTURE_2D, 0);

    free(spriteData);
    return glTex;
}
Vertices:
struct vertex {
    float position[3];
    float color[4];
    float texCoord[2];
};
typedef struct vertex vertex;

const vertex bgVertices[] = {
    {{1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {1, 0}}, // BR (0)
    {{1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {1, 1}},          // TR (1)
    {{-1, 1, 0}, {0, 222.0/255.0, 1.0, 1}, {0, 1}},         // TL (2)
    {{-1, -1, 0}, {0, 167.0/255.0, 253.0/255.0, 1}, {0, 0}} // BL (3)
};

const vertex textureVertices[] = {
    {{1, -1, 0}, {0, 0, 0, 0}, {1, 0}},  // BR (0)
    {{1, 1, 0}, {0, 0, 0, 0}, {1, 1}},   // TR (1)
    {{-1, 1, 0}, {0, 0, 0, 0}, {0, 1}},  // TL (2)
    {{-1, -1, 0}, {0, 0, 0, 0}, {0, 0}}  // BL (3)
};

const GLubyte indicies[] = {
    3, 2, 0, 1
};
Render code:
glClear(GL_COLOR_BUFFER_BIT);
GLsizei width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
glViewport(0, 0, width, height);
glBindBuffer(GL_ARRAY_BUFFER, bgVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_ARRAY_BUFFER, textureVertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(textureUniform, 0);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(vertex), 0);
glVertexAttribPointer(colorSlot, 4, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*3));
glVertexAttribPointer(textureCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(vertex), (GLvoid*)(sizeof(float)*7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(indicies)/sizeof(indicies[0]), GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
I'm using the blend function glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), in case that's relevant.

Any idea what the problem is?
Your GL render output looks pixelated because it has fewer pixels. Per the Drawing and Printing Guide for iOS, the default scale factor for a CAEAGLLayer is 1.0, so when you set up your GL render buffers you get one pixel in the buffer per point. (Remember, a point is a unit of UI layout, which on modern devices with Retina displays works out to several hardware pixels.) When you render that buffer full-screen, everything gets scaled up (by about 2.61x on an iPhone 6(s) Plus).
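(To see where 2.61 comes from: the iPhone 6(s) Plus lays out its UI at 414x736 points, but the panel is 1080x1920 physical pixels, and 1080 / 414 ≈ 2.61. That's the factor a 1.0-scale buffer gets stretched by, and it's also why nativeScale reports about 2.61 on that device while scale reports 3.0.)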
To render at the screen's native resolution, you need to increase your view's contentScaleFactor. (Do it early, before setting up your render buffers, so that they pick up the new scale factor from the view's layer.)
Note, though: you want to use the UIScreen property nativeScale, not scale. The scale property reflects UI rendering, where, on the iPhone 6(s) Plus, everything is done at 3x and then scaled down slightly to the display's native resolution. The nativeScale property reflects the number of actual device pixels per point; target that when doing GPU rendering, so you don't sap performance by drawing more pixels than you need. (On current devices other than the "Plus" iPhones, scale and nativeScale are identical, but using the latter is probably a good insurance policy.)
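Putting those two points together, a minimal sketch (assuming, as your framebuffer setup suggests, that this code runs inside the CAEAGLLayer-backed view itself, so that self is the view; the placement before buffer setup is the important part):

// Size the layer's backing store to the display's physical pixel grid.
// This must happen before -renderbufferStorage:fromDrawable:, which sizes
// the renderbuffer from the layer's bounds times its contentsScale.
self.contentScaleFactor = [UIScreen mainScreen].nativeScale;

context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
glGenRenderbuffers(1, &onscrRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, onscrRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.layer];

Setting contentScaleFactor on a view pushes the same value down to its layer's contentsScale, which is what renderbufferStorage:fromDrawable: actually reads.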
You can avoid a lot of these issues (and others) by letting GLKView set up the render buffers for you. Even if you're writing cross-platform GL, that part of your code has to be pretty platform- and device-specific anyway, so you might as well reduce the amount of it you have to write and maintain.
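A rough sketch of the GLKView route (the view controller and its wiring are illustrative, not from the original post):

@import UIKit;
@import GLKit;

@interface ViewController : UIViewController <GLKViewDelegate>
@end

@implementation ViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
    // GLKView sizes its drawable from bounds x contentScaleFactor, so set
    // this before the first draw to get full-resolution buffers.
    glView.contentScaleFactor = [UIScreen mainScreen].nativeScale;
    glView.delegate = self;
    [self.view addSubview:glView];
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // GLKView has already bound its framebuffer and set the viewport by
    // the time this is called; the question's render code would go here.
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
}
@end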
(Addressing the question's earlier edits, for posterity: this has nothing to do with multisampling or the quality of the GL texture data. Multisampling concerns the rasterization of polygon edges: points in the interior of a polygon get one fragment per pixel, but points on the edges get multiple fragments, whose colors are blended in the resolve stage. If you bound your texture to an FBO and fetched it back with glReadPixels, you'd find the image is pretty much the same as the one you put in.)
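If you want to run that check yourself, a minimal sketch (the throwaway FBO is a helper introduced here just for the readback; texture, width, and height are the names from the question's loading code):

// Attach the texture to a temporary FBO and read its texels back.
GLuint readFBO;
glGenFramebuffers(1, &readFBO);
glBindFramebuffer(GL_FRAMEBUFFER, readFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // Compare pixels against the spriteData that was uploaded; aside from
    // rounding, they should match.
    free(pixels);
}
glBindFramebuffer(GL_FRAMEBUFFER, onscrFramebuffer); // restore
glDeleteFramebuffers(1, &readFBO);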