How to render directly on a 3D texture efficiently in threejs / webgl?
I am currently working on a fluid simulation. I work in 3D, so both the inputs and the outputs are 3D. Each shader takes one or more 3D samplers and should ideally output 3D data.
At the moment I slice the 3D cube and run the shader once per plane. This works, but I then need to copy the data of every 2D texture back to the CPU to rebuild the 3D texture and send it back to the GPU. That copy step is very slow, and I suspect this approach is far from optimal.
const vertexShaderPlane = `#version 300 es
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
in vec3 position;
out vec3 vPosition;
void main() {
  vPosition = position;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position.xy, 0., 1. );
}
`
const fragmentShaderPlane = `#version 300 es
precision highp float;
precision highp sampler3D;
uniform float uZ;
in vec3 vPosition;
out vec4 out_FragColor;
void main() {
  out_FragColor = vec4(vPosition.xy, uZ, 1.);
}`
const vertexShaderCube = `#version 300 es
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
in vec3 position;
out vec3 vPosition;
void main() {
  vPosition = position;
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
`
const fragmentShaderCube = `#version 300 es
precision highp float;
precision highp sampler3D;
uniform sampler3D sBuffer;
in vec3 vPosition;
out vec4 out_FragColor;
void main() {
  vec4 data = texture(sBuffer, vec3(vPosition));
  out_FragColor = vec4(data);
}
`
const canvas = document.createElement('canvas')
const context = canvas.getContext('webgl2', { alpha: false, antialias: false })
const scene = new THREE.Scene()
const renderer = new THREE.WebGLRenderer({ canvas, context })
const cameras = {
  perspective: new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 50000),
  texture: new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0, 1)
}
renderer.autoClear = false
renderer.setPixelRatio(window.devicePixelRatio)
renderer.setSize(window.innerWidth, window.innerHeight)
// cameras.perspective.position.set(2, 2, 2)
document.body.appendChild(renderer.domElement)
// Uniforms
const planeUniforms = { uZ: { value: 0.0 } }
const cubeUniforms = { sBuffer: { value: null } }
// Plane (2D)
const materialPlane = new THREE.RawShaderMaterial({
  uniforms: planeUniforms,
  vertexShader: vertexShaderPlane,
  fragmentShader: fragmentShaderPlane,
  depthTest: true,
  depthWrite: true
})
const planeGeometry = new THREE.BufferGeometry()
const vertices = new Float32Array([
  0, 0, 0,
  1, 0, 0,
  1, 1, 0,
  1, 1, 0,
  0, 1, 0,
  0, 0, 0
])
planeGeometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3))
const plane = new THREE.Mesh(planeGeometry, materialPlane)
plane.position.set(-0.5, -0.5, -0.5)
scene.add(plane)
// Cube (3D)
const materialCube = new THREE.RawShaderMaterial({
  uniforms: cubeUniforms,
  vertexShader: vertexShaderCube,
  fragmentShader: fragmentShaderCube,
  depthTest: true,
  depthWrite: true,
  visible: false
})
const cube = new THREE.Group()
for (let x = 0; x < 32; x++) {
  const offset = x / 32
  const geometry = new THREE.BufferGeometry()
  const vertices = new Float32Array([
    0, 0, offset,
    1, 0, offset,
    1, 1, offset,
    1, 1, offset,
    0, 1, offset,
    0, 0, offset
  ])
  geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3))
  const mesh = new THREE.Mesh(geometry, materialCube)
  cube.add(mesh)
}
cube.position.set(-0.5, 0, -2)
cube.scale.set(0.5, 0.5, 0.5)
cube.rotation.set(1, 1, 1)
scene.add(cube)
// Computing Step
const texture2D = new THREE.WebGLRenderTarget(32, 32, { type: THREE.FloatType })
const planeSize = (32 ** 2 * 4)
const pixelBuffers = Array.from(Array(32), () => new Float32Array(planeSize))
const data = new Float32Array(planeSize * 32)
renderer.setRenderTarget(texture2D)
for (let i = 0; i < 32; i++) {
  materialPlane.uniforms.uZ.value = i / 32
  renderer.render(scene, cameras.texture)
  renderer.readRenderTargetPixels(texture2D, 0, 0, 32, 32, pixelBuffers[i]) // SLOW PART
  data.set(pixelBuffers[i], i * planeSize)
}
const texture3D = new THREE.DataTexture3D(data, 32, 32, 32)
texture3D.format = THREE.RGBAFormat
texture3D.type = THREE.FloatType
texture3D.unpackAlignment = 1
materialPlane.visible = false
// Display Step
materialCube.visible = true
cubeUniforms.sBuffer.value = texture3D
renderer.setRenderTarget(null)
renderer.render(scene, cameras.perspective)
<script src="https://threejs.org/build/three.min.js"></script>
Let me stress that the rendering itself works. It is just very slow, since I have to execute a dozen shader passes, one per slice.
The potential solutions I have found are the following:
- Use the renderer.copyFramebufferToTexture function to copy the data directly into a new texture. Unfortunately, I believe it only works with 2D textures, not 3D ones.
- Use web workers to split the task, with each worker rendering one plane. However, to pass the data to a worker you have to copy it, which brings back the original problem. Transferring the data instead of copying it would not be efficient either, because while one worker is finishing its job the others would have no access to the data.
Edit:
I am just looking for a way to speed up copying the 2D data to the CPU in order to rebuild the 3D texture on the GPU. The real problem is that renderer.readRenderTargetPixels really slows down my rendering.
As mentioned by @ScieCode, you cannot render to a 3D texture in WebGL/WebGL2, but you can use a 2D texture as 3D data. Say we have a 4x4x4 3D texture. We can store it in a 2D texture instead: that is 4 slices of 4x4 each. We might arrange those slices like this
00001111
00001111
00001111
00001111
22223333
22223333
22223333
22223333
Getting a pixel out of the 2D texture used as 3D data:
ivec3 src = ...          // some 3D coord
ivec2 size = textureSize(some2DSampler, 0);
int cubeSize = 4;        // could pass in as uniform
ivec2 slices = size / cubeSize;
ivec2 src2D = ivec2(
  src.x + (src.z % slices.x) * cubeSize,
  src.y + (src.z / slices.x) * cubeSize);
vec4 color = texelFetch(some2DSampler, src2D, 0);
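For reuse, you can wrap that lookup in a function. A minimal sketch, assuming the cube edge length comes in as a uniform (some2DSampler and uCubeSize are placeholder names):

uniform sampler2D some2DSampler;  // 2D atlas holding the 3D data
uniform int uCubeSize;            // cube edge length, e.g. 4

// Fetch the texel at 3D coordinate `src` from the 2D atlas.
vec4 fetch3D(ivec3 src) {
  ivec2 size = textureSize(some2DSampler, 0);
  ivec2 slices = size / uCubeSize;   // slices per row / per column
  ivec2 src2D = ivec2(
    src.x + (src.z % slices.x) * uCubeSize,
    src.y + (src.z / slices.x) * uCubeSize);
  return texelFetch(some2DSampler, src2D, 0);
}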
If we render a single quad that covers the whole texture, we know which 3D pixel we are currently writing to:
// assume size is the same as the texture above, otherwise pass it in
// as a uniform
int cubeSize = 4;        // could pass in as uniform
ivec2 slices = size / cubeSize;
ivec2 dst2D = ivec2(gl_FragCoord.xy);
ivec3 dst = ivec3(
  dst2D.x % cubeSize,
  dst2D.y % cubeSize,
  dst2D.x / cubeSize + (dst2D.y / cubeSize) * slices.x);
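Putting the read and write mappings together, here is a minimal sketch of a full-screen simulation pass: it works out which 3D cell the current fragment represents, reads the previous state from a 2D atlas, and writes the next value (uPrevState and uCubeSize are assumed uniform names; the copy at the end is a placeholder for your actual fluid update):

#version 300 es
precision highp float;
uniform sampler2D uPrevState;   // 2D atlas holding the previous 3D state
uniform int uCubeSize;          // e.g. 32
out vec4 out_FragColor;

vec4 fetch3D(ivec3 p) {
  ivec2 size = textureSize(uPrevState, 0);
  ivec2 slices = size / uCubeSize;
  return texelFetch(uPrevState, ivec2(
    p.x + (p.z % slices.x) * uCubeSize,
    p.y + (p.z / slices.x) * uCubeSize), 0);
}

void main() {
  ivec2 size = textureSize(uPrevState, 0);
  ivec2 slices = size / uCubeSize;
  ivec2 dst2D = ivec2(gl_FragCoord.xy);
  ivec3 dst = ivec3(
    dst2D.x % uCubeSize,
    dst2D.y % uCubeSize,
    dst2D.x / uCubeSize + (dst2D.y / uCubeSize) * slices.x);
  // placeholder update rule: copy the previous value of this cell
  out_FragColor = fetch3D(dst);
}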
So far this assumes every dimension of the cube is the same size. For something more generic, say we have a 5x4x6 cube. We could lay it out as 3x2 slices of 5x4:
000001111122222
000001111122222
000001111122222
000001111122222
333334444455555
333334444455555
333334444455555
333334444455555
ivec3 src = ...          // some 3D coord
ivec3 cubeSize = ivec3(5, 4, 6);   // could pass in as uniform
ivec2 size = textureSize(some2DSampler, 0);
int slicesAcross = size.x / cubeSize.x;
ivec2 src2D = ivec2(
  src.x + (src.z % slicesAcross) * cubeSize.x,
  src.y + (src.z / slicesAcross) * cubeSize.y);
vec4 color = texelFetch(some2DSampler, src2D, 0);
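To sanity-check the math: with src = ivec3(2, 1, 4) and the 15x8 atlas above, slicesAcross = 15 / 5 = 3, so src2D = ivec2(2 + (4 % 3) * 5, 1 + (4 / 3) * 4) = ivec2(7, 5), which lands in slice 4 of the diagram.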
And the corresponding write side (again, size comes from the texture or a uniform, as above):
ivec3 cubeSize = ivec3(5, 4, 6);   // could pass in as uniform
int slicesAcross = size.x / cubeSize.x;
ivec2 dst2D = ivec2(gl_FragCoord.xy);
ivec3 dst = ivec3(
  dst2D.x % cubeSize.x,
  dst2D.y % cubeSize.y,
  dst2D.x / cubeSize.x + (dst2D.y / cubeSize.y) * slicesAcross);
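On the JavaScript side, this layout means the whole compute step can stay on the GPU: render one full-screen quad into a 2D atlas render target, and ping-pong between two targets so each pass reads the previous state while writing the next. A minimal three.js sketch, assuming a 32x32x32 volume laid out as a 4x8 grid of slices and a simScene/simMaterial pair that draws a quad with a fragment shader like the one sketched earlier (those names are placeholders, not part of your original code):

const cubeSize = 32
const slicesX = 4
const slicesY = 8   // slicesX * slicesY === cubeSize
const makeTarget = () => new THREE.WebGLRenderTarget(
  cubeSize * slicesX, cubeSize * slicesY, {
    type: THREE.FloatType,
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter
  })
let prev = makeTarget()
let next = makeTarget()

function step () {
  simMaterial.uniforms.uPrevState.value = prev.texture
  renderer.setRenderTarget(next)
  renderer.render(simScene, cameras.texture)   // one draw call, no readback
  renderer.setRenderTarget(null)
  ;[prev, next] = [next, prev]                 // swap targets for the next pass
}

readRenderTargetPixels is then only needed if you ever want the data back on the CPU. For the display step you can sample prev.texture with the same 2D-as-3D mapping, though you give up the hardware's 3D filtering and would have to interpolate between slices yourself.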