WebGL basic shader confusion
I'm learning about WebGL shaders, but they confuse me. Here's what I have so far:
<script type="x-shader/x-vertex" id="vertexshader">
#ifdef GL_ES
precision highp float;
#endif
void main()
{
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
#ifdef GL_ES
precision highp float;
#endif
void main()
{
gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}
</script>
So far so good: it compiles and I get a pink cube.
Now for the confusion. As I understand it, the fragment shader modifies colors and the vertex shader modifies the shape.
What I don't get is whether gl_FragColor sets the color for the whole object, or whether things are drawn in some order such that I could manipulate the coordinates in the shader and, say, color it randomly.
If so, how does it know what shape to draw and in what order to color it?
Also, if I only want to use a fragment shader, why do I still need to define a vertex shader? What does the default gl_Position line do, and why is it needed?
In every GLSL tutorial I've tried so far, the code won't run; three.js fails to compile it. Any suggestions on where to start?
This question is quite broad.
Say you do something like this:
var myRenderer = new THREE.WebGLRenderer();
var myScene = new THREE.Scene();
var myTexture = new THREE.Texture();
var myColor = new THREE.Color();
var myMaterial = new THREE.MeshBasicMaterial({color:myColor, map:myTexture});
var myColoredAndTexturedCube = new THREE.Mesh( new THREE.CubeGeometry(), myMaterial);
var myCamera = new THREE.Camera();
If you wire all of this up, you'll render a cube on screen; if you supply both a color and a texture, it will show both (the texture tinted by the color).
A lot happens behind the scenes, though. Three.js issues instructions to the GPU through the WebGL API. These are very low-level calls, such as 'take this chunk of memory and have it ready to be drawn', 'prepare this shader to process this chunk of memory', 'set the blending mode for this call'.
I don't understand: does gl_FragColor set the color for the whole object, or is it drawn in some kind of order where I could manipulate the coordinates in the shader so that it gets colored randomly, for example?
If so, how does it know what shape and the coloring order?
You should read up on the rendering pipeline; maybe you won't fully understand it at first, but it will definitely clear some things up.
gl_FragColor sets the color of a pixel in a buffer (which can be your screen, or an off-screen texture). Yes, it sets the color of the 'entire object', but that entire object can be a particle cloud (which you might think of as many objects). You can have a 10x10 grid of cubes, each colored differently, yet still rendered in a single draw call (one object).
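To make that concrete, here is a tiny plain-JavaScript sketch (not real WebGL; all names here are made up for illustration) of what the rasterizer does: it works out which pixels each triangle covers, then runs your fragment shader once per covered pixel. That is how it "knows the shape": the shape comes from the vertices, and gl_FragColor only decides each covered pixel's color.

```javascript
// Edge function: cross(b - a, p - a); >= 0 when p is on the left of edge a->b,
// so a point inside a counter-clockwise triangle is "left" of all three edges.
function edge(ax, ay, bx, by, px, py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// "Rasterize" one triangle over a small grid and call fragmentShader per pixel.
function rasterize(tri, width, height, fragmentShader) {
  const framebuffer = {};
  const [a, b, c] = tri;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const px = x + 0.5, py = y + 0.5; // sample at the pixel center
      const w0 = edge(b[0], b[1], c[0], c[1], px, py);
      const w1 = edge(c[0], c[1], a[0], a[1], px, py);
      const w2 = edge(a[0], a[1], b[0], b[1], px, py);
      if (w0 >= 0 && w1 >= 0 && w2 >= 0) {
        // pixel is covered by the triangle -> fragment shader decides its color
        framebuffer[`${x},${y}`] = fragmentShader(px, py);
      }
    }
  }
  return framebuffer;
}

// A "fragment shader" that returns solid pink, like gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0):
const pink = () => [1.0, 0.0, 1.0, 1.0];
const fb = rasterize([[0, 0], [8, 0], [0, 8]], 8, 8, pink);
```

The triangle's vertices decide coverage; the fragment shader never sees "the object", only one pixel at a time.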
So, back to your shader:
//you don't see this, but three.js injects it for you; try intentionally adding a mistake to your shader, and when your debugger complains you'll see the entire shader, including these lines
uniform mat4 projectionMatrix; //one mat4 shared across all vertices/pixels
uniform mat4 modelViewMatrix; //one mat4 shared across all vertices/pixels
attribute vec3 position; //actual vertex, this value is different in each vertex
//try adding this
varying vec2 vUv;
void main()
{
vUv = uv; //uv, just like position, is another attribute that gets created for you automatically, this way we are sending it to the pixel shader through the varying vec2 vUv.
//this is the transformation
//projection matrix is what transforms space into perspective (vanishing points, things get smaller as they get further away from the camera)
//modelViewMatrix is actually two matrices combined: the viewMatrix, which also comes from the camera (how the camera is rotated and positioned relative to the rest of the world),
//finally the modelMatrix - how big is the object, where it stands, and how it's rotated
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4( position , 1.0 ); //will do the same thing
}
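To see why those two gl_Position lines are equivalent, here's a plain-JavaScript sketch of the same matrix math (column-major, like GLSL; the matrices below are made-up examples, not values three.js would hand you):

```javascript
// modelViewMatrix is just viewMatrix * modelMatrix, precomputed on the CPU,
// so P * MV * p and P * V * M * p give the same result.

function mat4Multiply(a, b) { // returns a * b; both arrays are column-major
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

function transform(m, v) { // m * vec4
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++)
    for (let k = 0; k < 4; k++)
      out[row] += m[k * 4 + row] * v[k];
  return out;
}

// made-up example matrices: a translation (model), another translation (view),
// and a toy "projection" that just scales x/y
const modelMatrix      = [1,0,0,0, 0,1,0,0, 0,0,1,0, 2,3,4,1];   // translate (2,3,4)
const viewMatrix       = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,-10,1]; // camera at z = 10
const projectionMatrix = [0.5,0,0,0, 0,0.5,0,0, 0,0,1,0, 0,0,0,1];

const modelViewMatrix = mat4Multiply(viewMatrix, modelMatrix); // what three.js precomputes
const p = [1, 1, 1, 1.0]; // vec4( position, 1.0 )

const a = transform(mat4Multiply(projectionMatrix, modelViewMatrix), p);
const b = transform(projectionMatrix, transform(viewMatrix, transform(modelMatrix, p)));
// a and b are identical
```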
Every material you create in three.js contains this part of the shader. It's not enough for lighting, since it has no normals.
Try this fragment shader:
varying vec2 vUv; //coming in from the vertex shader
void main(){
gl_FragColor = vec4( vUv , 0.0 , 1.0);
}
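The gradient you get from this shader is the "varying" mechanism at work: the uv you write per vertex is interpolated across the triangle, so every pixel receives its own blended value. A plain-JavaScript sketch of that interpolation (made-up names, not real WebGL):

```javascript
// Interpolate a per-vertex attribute at barycentric weights (w0, w1, w2),
// which is roughly what the GPU does for every varying at every pixel.
function interpolateVarying(uvA, uvB, uvC, w0, w1, w2) {
  return [
    uvA[0] * w0 + uvB[0] * w1 + uvC[0] * w2,
    uvA[1] * w0 + uvB[1] * w1 + uvC[1] * w2,
  ];
}

// The fragment shader from above: gl_FragColor = vec4( vUv, 0.0, 1.0 )
function fragmentShader(vUv) {
  return [vUv[0], vUv[1], 0.0, 1.0];
}

// uvs written at the triangle's three vertices by the vertex shader
const uvA = [0, 0], uvB = [1, 0], uvC = [0, 1];

// at a vertex, the varying equals exactly what that vertex wrote...
const atVertexA = fragmentShader(interpolateVarying(uvA, uvB, uvC, 1, 0, 0));
// ...in the middle of the triangle it's a blend of all three, hence the smooth gradient
const atCenter = fragmentShader(interpolateVarying(uvA, uvB, uvC, 1 / 3, 1 / 3, 1 / 3));
```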
Or better yet, let's display the object's world position as color:
Vertex shader:
varying vec3 vertexWorldPosition;
void main(){
vec4 worldPosition = modelMatrix * vec4( position , 1.0 ); //compute the world position, remember it,
//model matrix is mat4 that transforms the object from object space to world space, vec4( vec3 , 1.0 ) creates a point rather than a direction in "homogeneous coordinates"
//since we only need this to be vec4 for transformations and working with mat4, we save the vec3 portion of it to the varying variable
vertexWorldPosition = worldPosition.xyz; // we don't need .w
//do the rest of the transformation - what is this world space seen from the camera's point of view,
gl_Position = viewMatrix * worldPosition;
//we used gl_Position to write the previous result, we could have used a new vec4 cameraSpace (or eyeSpace, or viewSpace) but we can also write to gl_Position
gl_Position = projectionMatrix * gl_Position; //apply perspective distortion
}
Fragment shader:
varying vec3 vertexWorldPosition; //this comes in from the vertex shader
void main(){
gl_FragColor = vec4( vertexWorldPosition , 1.0 );
}
If you create a sphere at 0,0,0 and don't move it, one half will be black and the other half colored. Depending on the scale it might look white: with a radius of 100 you'll see the gradient from 0 to 1, and the rest will be white (r, g, b clamped to 1.0). Then try something like this:
gl_FragColor = vec4( vec3( sin( vertexWorldPosition.x ) ), 1.0 );
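What that does: world-space x runs through sin(), producing values in [-1, 1]; the framebuffer clamps colors to [0, 1], so the negative half of each wave renders black and the positive half fades up to white, giving repeating stripes along x. A small plain-JavaScript sketch of the values involved (names made up for illustration):

```javascript
// Evaluate vec4( vec3( sin( worldX ) ), 1.0 ) the way a clamping framebuffer would.
function stripeColor(worldX) {
  const v = Math.min(Math.max(Math.sin(worldX), 0.0), 1.0); // clamp to [0, 1]
  return [v, v, v, 1.0]; // grayscale: same value in r, g, b
}

stripeColor(Math.PI / 2);  // peak of a bright band: [1, 1, 1, 1]
stripeColor(-Math.PI / 2); // inside a black band:   [0, 0, 0, 1]
```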