Storing data as a texture for use in Vertex Shader for Instanced Geometry (THREE JS / GLSL)
I'm using THREE.InstancedBufferGeometry, and I want to access data encoded into a texture from within the vertex shader.
What I'd like to do is create a data texture with one pixel per instance, which stores position data for each instance (at a later stage I can then update the texture with a flow-field simulation to animate the positions).
I'm struggling to access the data from the texture in the vertex shader.
const INSTANCES_COUNT = 5000;

// FOR EVERY INSTANCE, GIVE IT A RANDOM X, Y, Z OFFSET, AND SAVE IT IN DATA TEXTURE
const data = new Uint8Array(4 * INSTANCES_COUNT);
for (let i = 0; i < INSTANCES_COUNT; i++) {
  const stride = i * 4;
  data[stride] = (Math.random() - 0.5);
  data[stride + 1] = (Math.random() - 0.5);
  data[stride + 2] = (Math.random() - 0.5);
  data[stride + 3] = 0.0;
}

const offsetTexture = new THREE.DataTexture( data, INSTANCES_COUNT, 1, THREE.RGBAFormat, THREE.FloatType );
offsetTexture.minFilter = THREE.NearestFilter;
offsetTexture.magFilter = THREE.NearestFilter;
offsetTexture.generateMipmaps = false;
offsetTexture.needsUpdate = true;

// CREATE MY INSTANCED GEOMETRY
const geometry = new THREE.InstancedBufferGeometry();
geometry.maxInstancedCount = INSTANCES_COUNT;
geometry.addAttribute( 'position', new THREE.Float32BufferAttribute([5, -5, 0, -5, 5, 0, 0, 0, 5], 3 )); // SIMPLE TRIANGLE

const vertexShader = `
  precision highp float;

  uniform vec3 color;
  uniform sampler2D offsetTexture;

  uniform mat4 modelViewMatrix;
  uniform mat4 projectionMatrix;

  attribute vec3 position;

  varying vec3 vPosition;
  varying vec3 vColor;

  void main(){
    vPosition = position;

    vec4 orientation = vec4(.0, .0, .0, .0);
    vec3 vcV = cross( orientation.xyz, vPosition );
    vPosition = vcV * ( 2.0 * orientation.w ) + ( cross( orientation.xyz, vcV ) * 2.0 + vPosition );

    vec2 uv = position.xy;
    vec4 data = texture2D( offsetTexture, uv );
    vec3 particlePosition = data.xyz * 1000.0;

    gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition + particlePosition, 1.0 );
  }
`;

const fragmentShader = `
  precision highp float;

  varying vec3 vColor;

  void main() {
    gl_FragColor = vec4(vColor, 1.0);
  }
`;

const uniforms = {
  size: { value: 1.0 },
  color: {
    type: 'c',
    value: new THREE.Color(0x3db230),
  },
  offsetTexture: {
    type: 't',
    value: offsetTexture,
  },
};

// CREATE MY MATERIAL
const material = new THREE.RawShaderMaterial({
  uniforms,
  vertexShader,
  fragmentShader,
  side: THREE.DoubleSide,
  transparent: false,
});

scene.add(new THREE.Mesh(geometry, material));
At the moment I don't seem to be able to access the data from the image in the vertex shader (for example, if I just set the uv to vec2(1.0, 0.0) and change the offset positions, nothing changes), and I'm not sure how to go about making sure each instance references the correct texel in the texture.
So, my two questions are:
1) How do I correctly set up the data texture and access that data in the vertex shader?
2) How do I correctly reference the texel that stores the data for a particular instance (e.g. instance 1000 should use vec2(1000, 1), etc.)?
Also, do I need to normalize the data (to 0.0–1.0, or 0–255, or -1 to +1)?
Thanks
You need to compute some kind of index into the texture for each instance. That means you need an attribute that is shared by every vertex of a given instance.
If your triangle is
[a, b, c]
your index should be
[0, 0, 0]
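In three.js terms, that shared per-instance value is what THREE.InstancedBufferAttribute is for: it advances once per instance, so every vertex of instance i sees the same number. A minimal sketch, reusing the geometry and INSTANCES_COUNT from the question (the attribute name aIndex is just illustrative):

// One float per instance; the GPU repeats it for every vertex of that instance.
const indices = new Float32Array(INSTANCES_COUNT);
for (let i = 0; i < INSTANCES_COUNT; i++) {
  indices[i] = i;
}
geometry.addAttribute('aIndex', new THREE.InstancedBufferAttribute(indices, 1));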
Say you have 1024 instances and a 1024×1 pixel texture.
attribute float aIndex;

vec2 myIndex = vec2( (aIndex + 0.5) / 1024., 0.5 );
vec4 myRes = texture2D( mySampler, myIndex );
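For the texture itself, one way to make that lookup return the signed offsets from the question is to back the THREE.FloatType texture with a Float32Array and give it a width equal to the instance count. A sketch, reusing the INSTANCES_COUNT and offsetTexture names from the question:

// One RGBA float texel per instance; xyz hold the offset, w is unused.
const data = new Float32Array(4 * INSTANCES_COUNT);
for (let i = 0; i < INSTANCES_COUNT; i++) {
  const stride = i * 4;
  data[stride] = Math.random() - 0.5;     // x offset
  data[stride + 1] = Math.random() - 0.5; // y offset
  data[stride + 2] = Math.random() - 0.5; // z offset
  data[stride + 3] = 0.0;
}
const offsetTexture = new THREE.DataTexture(data, INSTANCES_COUNT, 1, THREE.RGBAFormat, THREE.FloatType);
offsetTexture.minFilter = THREE.NearestFilter;
offsetTexture.magFilter = THREE.NearestFilter;
offsetTexture.generateMipmaps = false;
offsetTexture.needsUpdate = true;

Because the data is stored as floats, the shader reads the values back unscaled, so they don't need to be normalized to 0–255 or 0.0–1.0.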