LibTorch C++: Converting Tensor back to Image, Result is 3x3 Grid?

I am writing some code that takes a matrix object (assume it behaves like a cv::Mat), converts it to a tensor, does a forward pass through my model, and then converts the result back into my matrix object. One problem remains, though: the resulting matrix is a 3x3 grid of the result. I tested my conversion code (matrix to tensor and back) by passing an image through both conversions, and the resulting image comes out correct. This leads me to believe the issue lies in how the forward pass creates the output tensor, or in how I am using that output tensor. How should I handle the output tensor to fix this?

Code for context

Matrix to Tensor:

// Allocate an HWC float tensor and copy the raw image data into it
int numel = rows * cols * depth;
assert(numel > 0);

tensor_image = torch::zeros({ rows, cols, depth }, torch::kFloat);

std::memcpy(tensor_image.data_ptr<float>(), Image.GetConstDataPtr(), sizeof(float) * numel);

// Rearrange from HWC to CHW and add a batch dimension: {1, depth, rows, cols}
tensor_image = tensor_image.permute({ 2, 0, 1 }).unsqueeze(0);
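
As an aside, the zero-fill plus memcpy can be replaced with torch::from_blob, which wraps an existing buffer directly. A minimal sketch under the same assumptions as above (an Image object exposing a const float* via GetConstDataPtr(), pixel data stored in HWC order):

// Wrap the existing HWC float buffer instead of allocating zeros and copying,
// then clone so the tensor owns its own memory independently of Image
at::Tensor tensor_image = torch::from_blob(
    const_cast<float*>(Image.GetConstDataPtr()),
    { rows, cols, depth },
    torch::kFloat).clone();

// Rearrange to CHW and add a batch dimension, as above
tensor_image = tensor_image.permute({ 2, 0, 1 }).unsqueeze(0);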

Forward Pass:

// Pre-allocate an output tensor with the same shape as the input
at::Tensor output = torch::zeros({ tensor_image.sizes()[0], tensor_image.sizes()[1], tensor_image.sizes()[2], tensor_image.sizes()[3] }, torch::kFloat);

device = torch::kCUDA;

// Move model to GPU
module.to(device);

// Move input to GPU
tensor_image = tensor_image.to(device);

// Execute the model and turn its output into a tensor
output = module.forward({ tensor_image }).toTensor();

// Copy the result back to the CPU
output = output.to(torch::kCPU);
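
Unrelated to the grid issue, inference code like this usually also puts the module in eval mode and disables gradient tracking. A minimal sketch, assuming module is a torch::jit::script::Module; both lines would go before the forward call above:

// Disable training-only behaviour (e.g. dropout) and gradient tracking for inference
module.eval();
torch::NoGradGuard no_grad;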

Tensor to Matrix:

int numel = height * width * depth;
assert(numel > 0);

Image.Resize(height, width, depth);

// Drop the batch dimension and rearrange back from CHW to HWC
at::Tensor tensor_image_cpy = tensor_image.squeeze(0).permute({ 1, 2, 0 });

// Copy the tensor data back into the image buffer
std::memcpy(Image.GetDataPtr(), tensor_image_cpy.data_ptr<float>(), sizeof(float) * numel);

I found a solution to the problem. Quoting yugo's answer to this question:

Convert pytorch tensor to opencv mat and vice versa in C++

You need to make sure to reshape the data:

tensor_image_cpy = tensor_image.reshape({ width * height * depth });
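
This works because permute() only returns a non-contiguous view: memcpy from data_ptr() still walks the underlying memory in its original CHW order, which is what produces the tiled grid (the round-trip test passes because its two permutes cancel each other out). reshape(), or an explicit contiguous(), materialises the data in HWC order before the raw copy. Applied to the tensor-to-matrix snippet above, a minimal sketch with the same assumed Image interface could look like:

int numel = height * width * depth;
assert(numel > 0);

Image.Resize(height, width, depth);

// squeeze/permute give a non-contiguous HWC view; reshape forces a contiguous
// copy in that HWC order so the raw memcpy reads the pixels in the right layout
at::Tensor tensor_image_cpy = tensor_image.squeeze(0).permute({ 1, 2, 0 }).reshape({ height * width * depth });

std::memcpy(Image.GetDataPtr(), tensor_image_cpy.data_ptr<float>(), sizeof(float) * numel);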

Hope this helps anyone else running into this issue.