glTexImage2D error subtleties between iOS and Android - inconsistent documentation

So I have this line of code:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadow_tex_dim.x, shadow_tex_dim.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, shadow_texture_data);

It sets up a depth texture just fine on Android (running OpenGL ES 2) and on OSX.

When I run it on iOS (iOS 10, also running OpenGL ES 2), glGetError() returns GL_INVALID_OPERATION. (glGetError() called just before this line returns clean.)
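For reference, the failing check looks roughly like this (a minimal sketch; shadow_tex_dim and shadow_texture_data come from the line above, the surrounding texture setup and GL context are assumed):

GLenum before = glGetError();  // GL_NO_ERROR - clean going in
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadow_tex_dim.x, shadow_tex_dim.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, shadow_texture_data);
GLenum after = glGetError();   // GL_NO_ERROR on Android/OSX, GL_INVALID_OPERATION (0x0502) on iOS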

Here is the documentation for glTexImage2D: http://docs.gl/es2/glTexImage2D

Note that it says the only valid arguments for 'internalformat' are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA, yet further down, under the "examples" section, it shows glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, fbo_width, fbo_height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL); (which is very similar to my current line, but uses GL_UNSIGNED_BYTE instead of GL_FLOAT).

So, am I allowed to use GL_DEPTH_COMPONENT at all? Why does this work with OpenGL ES 2 on Android but not on iOS? And where would I have learned that I should use GL_FLOAT (note that switching it doesn't seem to change the behavior on either iOS or Android...)?

Apple's support for depth textures would be defined here: https://www.khronos.org/registry/gles/extensions/OES/OES_depth_texture.txt

There are two relevant passages in that document:

Textures with <format> and <internalformat> values of DEPTH_COMPONENT refer to a texture that contains depth component data. <type> is used to determine the number of bits used to specify depth texel values.

A <type> value of UNSIGNED_SHORT refers to a 16-bit depth value. A <type> value of UNSIGNED_INT refers to a 32-bit depth value.

The error INVALID_OPERATION is generated if the <format> and <internalformat> is DEPTH_COMPONENT and <type> is not UNSIGNED_SHORT or UNSIGNED_INT.
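Reading that literally, a variant that sticks to what the extension guarantees would look like this (a sketch, not something Apple documents for my exact case: <type> switched to GL_UNSIGNED_INT, and the data pointer set to NULL since a shadow texture is normally filled by rendering into an FBO rather than uploaded from memory):

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadow_tex_dim.x, shadow_tex_dim.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);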

This is also interesting: https://www.opengl.org/wiki/Common_Mistakes

In OpenGL, all depth values lie in the range [0, 1]. The integer normalization process simply converts this floating-point range into integer values of the appropriate precision. It is the integer value that is stored in the depth buffer.

Typically, 24-bit depth buffers will pad each depth value out to 32-bits, so 8-bits per pixel will go unused. However, if you ask for an 8-bit Stencil Buffer along with the depth buffer, the two separate images will generally be combined into a single depth/stencil image. 24-bits will be used for depth, and the remaining 8-bits for stencil.

Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?

glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);

Because the depth format is a normalized integer format, the driver will have to use the CPU to convert the normalized integer data into floating-point values. This is slow.

It looks like Android supports depth textures with a GL_FLOAT type.
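If one code path has to work on both platforms, the safest option seems to be checking for the extension at runtime and sticking to the <type> values the spec guarantees; a rough sketch of that idea (assumes a current GLES2 context and <string.h> for strstr):

const char *ext = (const char *)glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "GL_OES_depth_texture")) {
    // UNSIGNED_SHORT / UNSIGNED_INT are the only <type> values OES_depth_texture
    // guarantees; GL_FLOAT working on Android looks like driver leniency.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadow_tex_dim.x, shadow_tex_dim.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
}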