(Ray tracing) Trouble converting to screen coordinates, objects being stretched

I followed Ray Tracing in One Weekend and managed to get the final output, but I wanted to learn more about creating the camera and "drawing" to the screen, since the book doesn't say much about it.

When I try to create the camera in a different way, the spheres actually get stretched so that they look more like ellipses. I tried modifying the x and y assignments in screenCords, but I only managed to introduce more errors (e.g. objects wrapping around to the other side).

Camera.h:

#pragma once

#include "../Matrix.h"
#include "../Defs.h"
#include "Defs.h"

template<typename O>
using Point3 = Vec3<O>;

template<typename O>
using Color = Vec3<O>;

template <typename O>
class Camera{
  O Height;
  O Width;
  Vec3<O> Forward, Right, Up;
  Point3<O> Origin;

public:
  Camera(O fov, O aspect_ratio, Point3<O> origin, Point3<O> target, Vec3<O> upguide) {
    Height = atan(degrees_to_radians(fov));
    Width = Height * aspect_ratio;
    
    Origin = origin;

    Forward = target - origin;
    Forward.normalize();
    Right = Forward.cross(upguide);
    Right.normalize();
    Up = Right.cross(Forward);

  }

  Ray<O> get_raydir(O right, O up){
    Vec3<O> result(Forward + right * Width * Right + up * Height * Up);
    result.normalize();

    return Ray<O>(Origin, result);
  }

  void screenCords(O &x, O &y, O width, O height){
    x = ((2.0f * x) / width) - 1.0f;
    y = ((2.0f * y) / height);
  }
};

Main.cpp:

#include <iostream>
#include <cmath>
#include "../Matrix.h"
#include "Camera.h"
#include <vector>
#include "Image.h"
#include "Shapes.h"
#include "Tracer.h"
#include "../Defs.h"

template<typename O>
using Point3 = Vec3<O>;

template<typename O>
using Color = Vec3<O>;

int main(){
  const int img_ratio = 2;
  const int img_width = 640;
  const int img_height = 480;
  const int depth = 50;
  float t_Max = infinity;
  float t_Min = 0.001f;

  float inv_width = 1 / float(img_width);
  float inv_height = 1 / float(img_height);

  std::vector<Sphere<float>> shapes;

  Camera<float> cam1(20.0f, img_ratio, Point3<float>(0.0f, 0.0f, 0.0f), Point3<float>(0.0f, 0.0f, -1.0f), Vec3<float>(0.0f, 1.0f, 0.0f));

  Sphere<float> cir1(0.2f, Point3<float>(0.2f, 0.0f, -1.0f));
  Sphere<float> cir2(7.0f, Point3<float>(0.0f, -7.0f, -1.0f));
  Sphere<float> cir3(0.5f, Point3<float>(1.0f, 0.0f, -1.0f));
  shapes.push_back(cir1);
  //shapes.push_back(cir2);
  //shapes.push_back(cir3);

  Tracer<float> tracer(shapes);

  std::cout << "P3\n" << img_width << ' ' << img_height << "\n255" << std::endl;

  Ray<float> ray(Point3<float>(0.0f), Vec3<float>(0.0f));

  for (int j = 0; j < img_height; j++)
  {
    std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
    for (int i = 0; i < img_width; i++){

        float x = i;
        float y = j;

        cam1.screenCords(x, y, img_width, img_height);

        ray = cam1.get_raydir(x, y);
        //ray = Ray<float>(Vec3<float>(x1, y1, 1), Point3<float>(0.0f, 0.0f, 0.0f));
        tracer.iterator(ray, depth, t_Max, t_Min);
    }
  }
  std::cerr << "\n done " << std::endl;
}

I suspect the error is in one of these files, since the spheres do get drawn with their normal-based colors (the normal colors at the top and bottom are, unsurprisingly, bugged).

Here are a couple of example outputs: [output images]

You should define

const float img_ratio = (float)img_width/img_height;

which for a 640x480 image is 1.333, not the 2 you have in your code. With img_ratio = 2, Width = Height * 2, so every ray direction is stretched horizontally by a factor of 2 / 1.333 = 1.5, which is exactly the ellipse effect you see.

Also, in screenCords you subtract 1.0f from x but not from y, so y ends up spanning [0, 2] instead of [-1, 1]. That produces a tilt/shift effect.
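A minimal sketch of the two fixes combined, as drop-in replacements for the lines in your code (img_width, img_height, and the Camera<O>::screenCords member are the names from the question):

// Compute the aspect ratio from the actual image dimensions; the cast
// avoids integer division. Pass this to the Camera constructor.
const float img_ratio = (float)img_width / img_height;  // 1.333f for 640x480

// Map both axes symmetrically onto [-1, 1].
void screenCords(O &x, O &y, O width, O height){
  x = ((2.0f * x) / width) - 1.0f;   // maps [0, width)  onto [-1, 1)
  y = ((2.0f * y) / height) - 1.0f;  // maps [0, height) onto [-1, 1)
}

Depending on how your rows are written out, you may also want to flip the vertical axis (y = 1.0f - (2.0f * y) / height) so the image doesn't come out upside down, since your loop counts j from the top scanline.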