Direct3D11: Rendering 2D in a 3D scene: how do I keep 2D objects from moving in the viewport when the camera changes position?

An image showing an example of the problem: http://imgur.com/gallery/vmMyk

Hello, I need some help rendering 2D objects in a 3D scene with a 3D camera. I think I managed to work out the 2D coordinates in LH world coordinates. However, my rendered 2D object sits in the correct place only while the camera is at [0.0f, 0.0f, 0.0f]. At every other camera position the 2D object's placement in the scene is distorted. I believe my matrices are messed up, but I don't know where to look. I would appreciate any good ideas; if something is missing, please comment and I will edit the main post to give you more information.

I am using a simple 3D color HLSL shader (VS and PS version 4.0) with alpha blending for the bigger triangle:

cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}
struct VS_INPUT
{
    float4 Pos : POSITION;
    float4 Color : COLOR;
};
struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR;
};

PS_INPUT VS ( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;

    input.Pos.w = 1.0f;

    output.Pos = mul ( input.Pos, World );
    output.Pos = mul ( output.Pos, View );
    output.Pos = mul ( output.Pos, Projection );
    output.Color = input.Color;

    return output;
}

float4 PS ( PS_INPUT input ) : SV_Target
{
    return input.Color;
}

This is my vertex data structure:

  struct Vertex
  {
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT4 color;

    Vertex() {};

    Vertex(DirectX::XMFLOAT3 aPosition, DirectX::XMFLOAT4 aColor) 
      : position(aPosition)
      , color(aColor) 
    {};
  };

The render call for the object:

bool PrimitiveMesh::Draw()
{
  unsigned int stride = sizeof(Vertex);
  unsigned int offset = 0;

  D3DSystem::GetD3DDeviceContext()->IASetVertexBuffers(0, 1, &iVertexBuffer, &stride, &offset);
  D3DSystem::GetD3DDeviceContext()->IASetIndexBuffer(iIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
  D3DSystem::GetD3DDeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

  return true;
}

The draw call with initialization:

    static PrimitiveMesh* mesh;
    if (mesh == 0)
    {
      std::vector<PrimitiveMesh::Vertex> vertices;
      mesh = new PrimitiveMesh();

      DirectX::XMFLOAT4 color = { 186 / 256.0f, 186 / 256.0f, 186 / 256.0f, 0.8f };
      vertices.push_back({ DirectX::XMFLOAT3(0.0f, 0.0f, 0.0f), color });
      vertices.push_back({ DirectX::XMFLOAT3(0.0f, 600.0f, 0.0f), color });
      vertices.push_back({ DirectX::XMFLOAT3(800.0f, 600.0f, 0.0f), color });

      mesh->SetVerticesAndIndices(vertices);
    }
    // Getting clean matrices here:
    D3D::Matrices(world, view, projection, ortho);
    iGI->TurnZBufferOff();
    iGI->TurnOnAlphaBlending();
    mesh->Draw();
    XMMATRIX view2D = Camera::View2D();

    iColorShader->Render(iGI->GetContext(), 3, &world, &view2D, &ortho);
    iGI->TurnZBufferOn();

These are my 2D calculations for the camera:

  up = DirectX::XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
  lookAt = DirectX::XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f);
  rotationMatrix = DirectX::XMMatrixRotationRollPitchYaw(0.0f, 0.0f, 0.0f); // (pitch, yaw, roll);

  up = DirectX::XMVector3TransformCoord(up, rotationMatrix);
  lookAt = DirectX::XMVector3TransformCoord(lookAt, rotationMatrix) + position;
  view2D = DirectX::XMMatrixLookAtLH(position, lookAt, up);

I would appreciate any help. Kind regards.

With shaders, you are not forced to use matrices; you have the flexibility to simplify the problem.

Assuming you render your 2D objects with coordinates given in pixels, the only requirement is to scale them back into normalized projection space.

The vertex shader can be as short as this:

cbuffer ConstantBuffer : register( b0 ) {
    float2 rcpDim; // 1 / renderTargetSize
}
PS_INPUT VS ( VS_INPUT input ) {
    PS_INPUT output;

    output.Pos.xy = input.Pos.xy * rcpDim * 2; // from pixel to [0..2]
    output.Pos.xy -= 1; // to [-1..1]
    output.Pos.y *= -1; // because top left in texture space is bottom left in projective space
    output.Pos.zw = float2(0,1);
    output.Color = input.Color;
    return output;
}

You can of course build a set of matrices that achieves the same result as your original shader: just set World and View to identity and use an orthographic projection from XMMatrixOrthographicOffCenterLH(0,width,0,height,0,1) as the Projection. But since you are new to 3D programming, you will soon have to learn to deal with multiple shaders anyway, so take it as an exercise.
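
As a reference point, here is a minimal sketch of that matrix setup (width, height and the upload into the existing ConstantBuffer are assumed to come from your own code):

#include <DirectXMath.h>

// Hypothetical sketch: identity World/View plus a pixel-space orthographic projection,
// matching what the original World/View/Projection shader expects for 2D drawing.
DirectX::XMMATRIX world2D = DirectX::XMMatrixIdentity();
DirectX::XMMATRIX view2D  = DirectX::XMMatrixIdentity();
DirectX::XMMATRIX proj2D  = DirectX::XMMatrixOrthographicOffCenterLH(
    0.0f, static_cast<float>(width),   // left, right  (render target width in pixels)
    0.0f, static_cast<float>(height),  // bottom, top  (render target height in pixels)
    0.0f, 1.0f);                       // near, far
// world2D, view2D and proj2D would then be written into the ConstantBuffer
// in place of the 3D camera matrices before drawing the 2D geometry.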

Well, I solved my problem. For some strange reason DirectXMath was generating a wrong XMMATRIX: my XMMatrixOrthographicLH() was completely incorrect even with good parameters. I solved the problem with the classic definition of the orthographic matrix, found in this article (the definition from Figure 10):

auto orthoMatrix = DirectX::XMMatrixIdentity();
// Scale x and y from window dimensions into [-1..1] clip space
orthoMatrix.r[0].m128_f32[0] = 2.0f / Engine::VideoSettings::Current()->WindowWidth();
orthoMatrix.r[1].m128_f32[1] = 2.0f / Engine::VideoSettings::Current()->WindowHeight();
// Depth terms taken from the article's Figure 10
orthoMatrix.r[2].m128_f32[2] = -(2.0f / (screenDepth - screenNear));
orthoMatrix.r[2].m128_f32[3] = -(screenDepth + screenNear) / (screenDepth - screenNear);

galop1n gave a great solution, but on my system

cbuffer ConstantBuffer : register( b0 )
{
    float2 rcpDim; // 1 / renderTargetSize
}

needs a size that is a multiple of 16 bytes, which can be done like this:

struct VS_CONSTANT_BUFFER
{
    DirectX::XMFLOAT2 rcpDim;
    DirectX::XMFLOAT2 rcpDim2;
};

// Supply the vertex shader constant data.
VS_CONSTANT_BUFFER VsConstData;
VsConstData.rcpDim = { 2.0f / w,2.0f / h};

// Fill in a buffer description.
D3D11_BUFFER_DESC cbDesc;
ZeroMemory(&cbDesc, sizeof(cbDesc));
cbDesc.ByteWidth = sizeof(VS_CONSTANT_BUFFER);
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;

// Fill in the subresource data.
D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = &VsConstData;
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;

// Create the buffer.
HRESULT hr = pDevice->CreateBuffer(&cbDesc, &InitData,
    &pConstantBuffer11);

or with alignment:

__declspec(align(16))
struct VS_CONSTANT_BUFFER
{
    DirectX::XMFLOAT2 rcpDim;
};
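
For completeness, a minimal sketch of how such a dynamic constant buffer is typically updated and bound each frame (pContext is assumed to be the immediate device context; pConstantBuffer11 and VsConstData come from the snippet above):

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(pContext->Map(pConstantBuffer11, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    // WRITE_DISCARD hands back a fresh memory region, so overwrite the whole struct.
    memcpy(mapped.pData, &VsConstData, sizeof(VS_CONSTANT_BUFFER));
    pContext->Unmap(pConstantBuffer11, 0);
}
pContext->VSSetConstantBuffers(0, 1, &pConstantBuffer11); // slot 0 matches register(b0)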