Performantly render tens of thousands of spheres of variable size/color/position in Three.js?
This question follows on from my previous question, where I found that using Points leads to problems:
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example on this article)
I've been trying to work through that answer. My questions are:
What is 'instancing'? What's the difference between merging geometries and instancing? And, if I were to do either of these, what geometry would I use and how would I vary the color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which applies a position and a size (scale?). So, is the underlying geometry a SphereBufferGeometry of unit radius? But then, how do you apply the color?
Also, I read about the custom shader approach, and it makes vague sense. But it seems more complex. Would the performance be any better than the above?
This is a fairly broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering.
If you bind your sphere geometry once but keep re-rendering it, it costs your computer more to issue the draw call over and over than it costs the GPU to do the actual drawing. You end up with an idle GPU, a powerful parallel processing device.
Obviously, if you create a unique sphere at every point in space and merge them all, you pay the cost of telling the GPU to render just once, and it stays busy rendering your thousands of spheres.
However, merging increases your memory footprint, and there is some overhead while you actually create the unique data. Instancing is a built-in, clever way of achieving the same effect at a lower memory cost.
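To make the memory trade-off concrete, here is a back-of-the-envelope comparison (the vertex counts and attribute layout are illustrative assumptions, not numbers from the question):

```javascript
// Rough memory estimate. Merging N spheres duplicates every vertex,
// while instancing stores the template geometry once plus a few
// floats per instance. Assumes position(3)+normal(3)+uv(2) floats
// per vertex, 4 bytes per float.
function bytesMerged(numSpheres, vertsPerSphere) {
  const floatsPerVertex = 3 + 3 + 2;
  return numSpheres * vertsPerSphere * floatsPerVertex * 4;
}

function bytesInstanced(numSpheres, vertsPerSphere) {
  const floatsPerVertex = 3 + 3 + 2;
  const template = vertsPerSphere * floatsPerVertex * 4;
  // per instance: center (3) + color (3) + size (1) floats
  const perInstance = numSpheres * (3 + 3 + 1) * 4;
  return template + perInstance;
}

bytesMerged(10000, 512);    // 163840000 bytes, ~164 MB
bytesInstanced(10000, 512); // 296384 bytes, ~0.3 MB
```

Same 10,000 spheres either way; instancing just avoids storing 10,000 copies of the template vertices.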
Following up on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, for example having different colors, you have to write custom shaders.
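That per-instance matrix is a plain 4×4 stored in column-major order, the same layout `InstancedMesh.setMatrixAt` consumes via a `Matrix4`. A library-free sketch of what one such matrix holds (translation + uniform scale) and what applying it to a vertex does:

```javascript
// Build a column-major 4x4 transform combining a uniform scale `s`
// and a translation (tx, ty, tz) -- the kind of per-instance data
// three.js stores for each instance.
function composeScaleTranslate(s, tx, ty, tz) {
  return new Float32Array([
    s, 0, 0, 0,   // column 0
    0, s, 0, 0,   // column 1
    0, 0, s, 0,   // column 2
    tx, ty, tz, 1 // column 3 (translation)
  ]);
}

// Transform a point [x, y, z] by a column-major 4x4 matrix,
// as the vertex shader effectively does per instance.
function applyMatrix(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// scale by 2, then move +10 on x:
applyMatrix(composeScaleTranslate(2, 10, 0, 0), [1, 1, 1]); // → [12, 2, 2]
```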
Instancing allows you to ask the system to draw many things with one "ask" instead of one "ask" per thing, which means it ends up being much faster. You can think of it like anything else. If you want 3 hamburgers you could ask someone to make you one; when they finish, you could ask them to make another; when they finish, you could ask them to make a third. That is much slower than just asking them to make 3 hamburgers at the start. It's not a perfect analogy, but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the poor analogy above, merging is like making one big 1-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting the toppings and buns on it is marginally faster than doing the same thing to 3 smaller burgers.
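For contrast with instancing, merging means baking every point's center and size directly into one big vertex array up front. A sketch of the idea in plain JavaScript, using the same 6-vertex quad template the instancing code below uses (function names are illustrative):

```javascript
// One quad as two triangles, centered on the origin, in 2D.
const quadTemplate = [
  [-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5],
  [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5],
];

// Merge many quads into a single position array: each quad's center
// and size are baked into its 6 template vertices, so the whole set
// can be drawn with one BufferAttribute and one draw call.
function mergeQuads(centers, sizes) {
  const out = [];
  centers.forEach(([cx, cy, cz], i) => {
    for (const [qx, qy] of quadTemplate) {
      out.push(cx + qx * sizes[i], cy + qy * sizes[i], cz);
    }
  });
  return new Float32Array(out);
}

const positions = mergeQuads([[0, 0, 0], [10, 0, 0]], [2, 4]);
// positions.length === 36 (2 quads * 6 vertices * xyz)
```

Note the cost: the template is copied once per point, and changing any point's size or center later means rewriting its 6 vertices. Instancing keeps the template shared and the per-point data separate.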
As for which is the best solution for you, it depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera the way Points do. If you used a sphere geometry then you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing 100s or 1000s of vertices per sphere, which is slower than 6 vertices per quad.
So, once again it requires custom shaders to keep the points facing the camera.
To do it with instancing, the short version is: you decide which vertex data is repeated for each instance. For a textured quad, for example, we need 6 vertex positions and 6 uvs. For these you make normal BufferAttributes.
Then you decide which vertex data is unique to each instance. In your case the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.
We add all of those attributes to an InstancedBufferGeometry and, as the last argument, we tell it how many instances there are.
At draw time you can think of it like this:
- for each instance
  - set size to the next value in the size attribute
  - set color to the next value in the color attribute
  - set center to the next value in the center attribute
  - call the vertex shader 6 times, with position and uv set to the nth value in their attributes.
In this way you get the same geometry (the positions and uvs) used multiple times, but each time a few values (size, color, center) change.
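That draw loop can be simulated in plain JavaScript to see which attribute advances when. This is a conceptual model of instanced drawing, not real GPU code; the names are illustrative:

```javascript
// Simulate instanced drawing: per-instance attributes advance once
// per instance, while per-vertex attributes repeat for every instance.
function simulateInstancedDraw(perVertex, perInstance, vertexShader) {
  const out = [];
  for (const inst of perInstance) {      // "for each instance"
    for (const v of perVertex) {         // 6 quad vertices, reused
      out.push(vertexShader(v, inst));
    }
  }
  return out;
}

const verts = simulateInstancedDraw(
  // shared quad positions (the normal BufferAttribute)
  [[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5],
   [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5]],
  // per-instance data (the InstancedBufferAttributes)
  [{ center: [0, 0], size: 2 }, { center: [5, 5], size: 1 }],
  // what the vertex shader does: center + position * size
  (pos, inst) => [
    inst.center[0] + pos[0] * inst.size,
    inst.center[1] + pos[1] * inst.size,
  ]
);
// verts.length === 12: 6 template vertices * 2 instances
```

This mirrors the vertex shader in the snippet below, where `position * size` is offset by each instance's `center`.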
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - GPU Picking with instanced quads
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full height
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>