How to combine an image tensor (4D) and a depth tensor (4D) to create a 5D tensor [batch size, channels, depth, height, width] in PyTorch?
During training I load image and disparity data. The image tensor has shape [2, 3, 256, 256] and the disparity/depth tensor has shape [2, 1, 256, 256] (batch size, channels, height, width).
I want to use Conv3d, so I need to combine the two tensors into a new tensor of shape [2, 3, 256, 256, 256] (batch size, channels, depth, height, width).
The depth values range from 0 to 400; one possibility would be to divide that range into intervals, e.g. 4 intervals of 100 (a rough sketch of this binning follows the loop below). I want the resulting tensor to look like voxels, similar to the technique used in this paper. The training loop that iterates over the data looks like this:
for batch_id, sample in enumerate(train_loader):
    sample = {name: tensor.cuda() for name, tensor in sample.items()}
    # image tensor [2, 3, 256, 256]
    rgb_image = transforms.Lambda(lambda x: x.mul(255))(sample["frame"])
    # translate disparity to depth
    depth_from_disparity_frame = 132.28 / sample["disparity_frame"]
    # depth tensor [2, 1, 256, 256]
    depth_image = depth_from_disparity_frame.unsqueeze(1)
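For the interval idea, here is a minimal sketch of how the binning might look (dummy data; the choice of 4 bins of width 100 is only an assumption, not something the loop above produces):

import torch

# Sketch: quantize metric depth in [0, 400) into 4 bins of width 100.
# depth_image here is a dummy stand-in for the [2, 1, 256, 256] tensor above.
depth_image = torch.rand(2, 1, 256, 256) * 400
num_bins, bin_width = 4, 100
depth_bins = torch.div(depth_image, bin_width, rounding_mode='floor').long()
depth_bins = depth_bins.clamp(0, num_bins - 1)  # guard against values at exactly 400
# depth_bins now holds indices in [0, 3], usable with F.one_hot(depth_bins, num_classes=4)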
From the article you linked:
We create a 3D voxel representation, with the same height and width as the original image, and with a depth determined by the difference between the maximum and minimum depth values found in the images. Each RGB-D pixel of an image is then placed at the same position in the voxel grid but at its corresponding depth.
This is more or less what Ivan suggested. If you know your depth will always be in the 0-400 range, I suppose you can skip the part about "a depth determined by the difference between the maximum and minimum depth values". The depth can always be normalized beforehand or afterwards.
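For instance, assuming the raw depth can be non-integer or fall slightly outside 0-400 (my assumption, not something stated above), one way to normalize it into valid one-hot indices beforehand:

import torch

# Sketch: map raw (float, possibly out-of-range) depth to integer indices
# in [0, 400) so it can be one-hot encoded as in the code below.
raw_depth = torch.rand(2, 1, 256, 256) * 420 - 10  # dummy values, some out of range
depth_idx = raw_depth.round().long().clamp(0, 399)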
Code with dummy data:
import torch
import torch.nn.functional as F

# Declarations (dummy tensors)
rgb_im = torch.randint(0, 255, [1, 3, 256, 256])
depth = torch.randint(0, 400, [1, 1, 256, 256])  # F.one_hot needs integer indices; randint yields int64

# Calculations
depth_ohe = F.one_hot(depth, num_classes=400)       # (batch, channel, height, width, depth): one-hot along the last axis
bchwd_tensor = rgb_im.unsqueeze(-1) * depth_ohe     # (batch, channel, height, width, depth): each RGB value placed at its depth
bcdhw_tensor = bchwd_tensor.permute(0, 1, 4, 2, 3)  # (batch, channel, depth, height, width)
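A quick sanity check that the result feeds into Conv3d as intended (sizes reduced so it runs fast; the layer hyperparameters here are arbitrary, not from the question):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Same construction as above, at reduced resolution to keep memory small.
rgb_im = torch.randint(0, 255, [1, 3, 32, 32])
depth = torch.randint(0, 50, [1, 1, 32, 32])
depth_ohe = F.one_hot(depth, num_classes=50)                       # (1, 1, 32, 32, 50)
bcdhw = (rgb_im.unsqueeze(-1) * depth_ohe).permute(0, 1, 4, 2, 3)  # (1, 3, 50, 32, 32)

conv = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
out = conv(bcdhw.float())  # Conv3d expects floating-point input
print(out.shape)           # torch.Size([1, 8, 50, 32, 32])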