Quaternion-based First Person View Camera
I have been learning OpenGL by following the tutorial at https://paroj.github.io/gltut/.
Past the basics, I am somewhat stuck on understanding quaternions and how they relate to orientations and transformations in space, especially between world space and camera space and vice versa. In the chapter Camera-Relative Orientation, the author builds a camera that rotates a model in world space relative to the camera's orientation. Quoting:
We want to apply an orientation offset (R), which takes points in camera-space. If we wanted to apply this to the camera matrix, it would simply be multiplied by the camera matrix: R * C * O * p. That's nice and all, but we want to apply a transform to O, not to C.
My uneducated guess is that if we applied the offset in camera space instead, we would get a first-person camera. Is that correct? As written, the offset is applied to the model in world space, making the spaceship rotate relative to world space rather than to camera space; we merely observe that rotation from the camera.
Armed with at least some understanding of quaternions (or so I thought), I tried to implement a first-person camera. It has two properties:
struct Camera {
    glm::vec3 position;    // Position in world space.
    glm::quat orientation; // Orientation in world space.
};
The position is modified by keyboard input, while the orientation changes in response to mouse movement across the screen.
Note: GLM overloads the * operator for glm::quat * glm::vec3, applying the rotation of the vector by the quaternion (a more compact form of the relation v' = qvq^-1).
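For instance, a minimal sketch (not from the tutorial) showing the two forms agree, using glm::conjugate as the inverse of a unit quaternion:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Both forms rotate v by 90 degrees around the Y axis; both yield roughly (-1, 0, 0).
glm::quat q = glm::angleAxis(glm::radians(90.0f), glm::vec3(0, 1, 0));
glm::vec3 v(0, 0, -1);

glm::vec3 a = q * v; // compact overloaded form
glm::quat r = q * glm::quat(0.0f, v.x, v.y, v.z) * glm::conjugate(q); // v' = qvq^-1
glm::vec3 b(r.x, r.y, r.z); // equal to a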
For example, moving forward and moving right:
glm::vec3 worldOffset;
float scaleFactor = 0.5f;

if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_Z_NEG]); // AXIS_Z_NEG = glm::vec3(0, 0, -1)
    position += worldOffset * scaleFactor;
}
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) {
    worldOffset = orientation * (axis_vectors[AxisVector::AXIS_X_NEG]); // AXIS_X_NEG = glm::vec3(-1, 0, 0)
    position += worldOffset * scaleFactor;
}
The orientation and position are passed to glm::lookAt to build the world-to-camera transformation, like this:
auto camPosition = position;
auto camForward = orientation * glm::vec3(0.0, 0.0, -1.0);
viewMatrix = glm::lookAt(camPosition, camPosition + camForward, glm::vec3(0.0, 1.0, 0.0));
Combining the model, view, and projection matrices and passing the result to the vertex shader shows everything working as one would expect from a first-person POV. However, when I add mouse movement, tracking the amount of motion in the x and y directions, things fall apart. I want to rotate around the world y-axis and the local x-axis:
auto xOffset = glm::angleAxis(xAmount, axis_vectors[AxisVector::AXIS_Y_POS]); // mouse movement in x-direction
auto yOffset = glm::angleAxis(yAmount, axis_vectors[AxisVector::AXIS_X_POS]); // mouse movement in y-direction
orientation = orientation * xOffset; // Works OK, can look left/right
orientation = yOffset * orientation; // When adding this line, things get ugly
What could be going wrong here? I admit I don't have enough knowledge to properly debug the mouse-movement code; I have mostly been following the line "right multiply to apply the offset in world space, left multiply to do it in camera space."
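For reference, the rule depends on which way the quaternion maps. A minimal sketch, assuming (as the movement code above implies) that orientation maps camera-local directions into world space; note that the quoted rule is stated for the opposite, world-to-camera convention, which may be part of the confusion:

// offset: some small incremental rotation, e.g. built from a mouse delta.
orientation = offset * orientation;        // applied about a world-space axis
orientation = orientation * offset;        // applied about a camera-local axis
orientation = glm::normalize(orientation); // renormalize after incremental updates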
I feel my grasp of all this is only half-formed; drawing conclusions from the large number of online resources on the topic has left me at once better educated and more confused. Thanks for any answers.
To rotate a glm quaternion representing an orientation:
// Precomputation:
// pitch (rotation around x, in radians),
// yaw   (rotation around y, in radians),
// roll  (rotation around z, in radians)
// are computed/incremented by mouse/keyboard events.
Compute the view matrix:
void CameraFPSQuaternion::UpdateView()
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll.
    glm::quat orientation = qPitch * qYaw;
    orientation = glm::normalize(orientation);
    glm::mat4 rotate = glm::mat4_cast(orientation);

    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);

    viewMatrix = rotate * translate;
}
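The angles themselves are accumulated elsewhere. A minimal sketch of such a handler (the OnMouseMove name, the sensitivity value, and the clamp range are assumptions, not part of the original answer):

void CameraFPSQuaternion::OnMouseMove(float dx, float dy)
{
    const float sensitivity = 0.002f; // radians per pixel of mouse travel
    yaw   += dx * sensitivity;
    pitch += dy * sensitivity;
    // Clamp pitch so the camera cannot flip over the poles.
    pitch = glm::clamp(pitch, glm::radians(-89.0f), glm::radians(89.0f));
}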
If you want to store the quaternion instead, recompute it whenever yaw, pitch, or roll changes:
void CameraFPSQuaternion::RotatePitch(float rads) // rotate around the cam's local X axis
{
    glm::quat qPitch = glm::angleAxis(rads, glm::vec3(1, 0, 0));
    m_orientation = glm::normalize(qPitch) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);
    m_viewMatrix = rotate * translate;
}
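The original does not show the matching yaw method; here is a sketch following the same convention (m_orientation is the world-to-camera rotation, so right-multiplying applies the offset about a world-space axis):

void CameraFPSQuaternion::RotateYaw(float rads) // rotate around the world Y axis
{
    glm::quat qYaw = glm::angleAxis(rads, glm::vec3(0, 1, 0));
    m_orientation = m_orientation * glm::normalize(qYaw);

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);
    m_viewMatrix = rotate * translate;
}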
If you want to apply a rotation speed around a given axis, you can use slerp:
void CameraFPSQuaternion::Update(float deltaTimeSeconds)
{
    // FPS camera: RotationX(pitch) * RotationY(yaw)
    glm::quat qPitch = glm::angleAxis(m_d_pitch, glm::vec3(1, 0, 0));
    glm::quat qYaw   = glm::angleAxis(m_d_yaw, glm::vec3(0, 1, 0));
    glm::quat qRoll  = glm::angleAxis(m_d_roll, glm::vec3(0, 0, 1));

    // For an FPS camera we can omit roll.
    glm::quat orientationDelta = qPitch * qYaw;
    // Interpolate from the identity quaternion toward the full per-second delta;
    // for unit quaternions glm::mix performs a spherical interpolation.
    glm::quat delta = glm::mix(glm::quat(1, 0, 0, 0), orientationDelta, deltaTimeSeconds);
    m_orientation = glm::normalize(delta) * m_orientation;

    glm::mat4 rotate = glm::mat4_cast(m_orientation);
    glm::mat4 translate = glm::mat4(1.0f);
    translate = glm::translate(translate, -eye);
    viewMatrix = rotate * translate;
}
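Hypothetical per-frame usage (the timing code is an assumption, using GLFW as elsewhere in this question):

double lastTime = glfwGetTime();
while (!glfwWindowShouldClose(window)) {
    double now = glfwGetTime();
    camera.Update(static_cast<float>(now - lastTime)); // camera is a CameraFPSQuaternion
    lastTime = now;
    // ... render ...
}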
The problem was with using glm::lookAt to construct the view matrix. (With a fixed (0, 1, 0) up vector, lookAt reconstructs the rotation from just the forward direction, ignoring any roll encoded in the quaternion and degenerating as the forward direction approaches the up axis.) Instead, I now build the view matrix like this:
auto rotate = glm::mat4_cast(entity->orientation);
auto translate = glm::mat4(1.0f);
translate = glm::translate(translate, -entity->position);
viewMatrix = rotate * translate;
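An equivalent way to view this (a sketch, not from the original post): the view matrix is the inverse of the camera's world transform, and the quaternion stored here is already the world-to-camera rotation:

// The camera's world transform: translate to its position, then rotate by
// the camera-to-world rotation (the conjugate of the stored quaternion).
glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), entity->position)
                      * glm::mat4_cast(glm::conjugate(entity->orientation));
// Inverting it yields the same view matrix as rotate * translate above,
// up to floating-point error.
viewMatrix = glm::inverse(cameraWorld);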
For movement, I now multiply by the inverse of the orientation rather than the orientation itself: since the stored quaternion is now the world-to-camera rotation, its conjugate maps camera-local directions back into world space.
glm::quat invOrient = glm::conjugate(orientation);

if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) {
    worldOffset = invOrient * (axis_vectors[AxisVector::AXIS_Z_NEG]);
    position += worldOffset * scaleFactor;
}
...
Everything else stays the same, apart from some additional normalization of the offset quaternions in the mouse-movement code.
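For completeness, a sketch of the mouse update under the new convention (the original only mentions the added normalization; with a world-to-camera quaternion, the two multiplications from the question now land in the intended spaces):

auto xOffset = glm::angleAxis(xAmount, glm::vec3(0, 1, 0)); // yaw about the world Y axis
auto yOffset = glm::angleAxis(yAmount, glm::vec3(1, 0, 0)); // pitch about the camera's local X axis
orientation = glm::normalize(yOffset * orientation * xOffset);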
The camera now behaves and feels like a first-person camera. I still don't properly understand the difference (if any) between a view matrix and a lookAt matrix, but that's a topic for another question.