Inputting numpy array images into a PyTorch neural net

I have a numpy array representation of an image and I want to turn it into a tensor so I can feed it through my PyTorch neural network.

I understand that the network expects transformed tensors shaped [3,100,100] rather than [100,100,3], that the pixel values have to be rescaled, and that the images must be batched.

So I did the following:

import cv2
import numpy as np
import torch
my_img = cv2.imread('testset/img0.png')
my_img.shape # returns (100, 100, 3): a 3-channel image with 100x100 resolution
my_img = np.transpose(my_img, (2, 0, 1))
my_img.shape # returns (3, 100, 100)
#convert the numpy array to tensor
my_img_tensor = torch.from_numpy(my_img)
#rescale to be [0,1] like the data it was trained on by default 
my_img_tensor *= (1/255)
#turn the tensor into a batch of size 1
my_img_tensor = my_img_tensor.unsqueeze(0)
#send image to gpu 
my_img_tensor.to(device)
#put forward through my neural network.
net(my_img_tensor)

However, this returns the error:

RuntimeError: _thnn_conv2d_forward is not implemented for type torch.ByteTensor

The problem is that the input you provide to the network is of type ByteTensor, while only float operations are implemented for conv-like operations. Try the following:

my_img_tensor = my_img_tensor.type('torch.DoubleTensor')
# for converting to double tensor
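Note that most PyTorch models use float32 weights, so casting the input with .float() (and rescaling after the cast) is usually sufficient; a DoubleTensor input only works if the model itself is double. Below is a minimal sketch of the full pipeline with the dtype fix applied, assuming the same testset/img0.png image and that net and device are already defined, with the model already moved to device:

import cv2
import numpy as np
import torch

my_img = cv2.imread('testset/img0.png')           # (100, 100, 3) uint8 array in HWC order
my_img = np.transpose(my_img, (2, 0, 1))          # reorder to CHW: (3, 100, 100)
my_img_tensor = torch.from_numpy(my_img).float()  # cast to float32 before rescaling
my_img_tensor /= 255.0                            # rescale pixel values to [0, 1]
my_img_tensor = my_img_tensor.unsqueeze(0)        # add a batch dimension: (1, 3, 100, 100)
my_img_tensor = my_img_tensor.to(device)          # .to() is not in-place, so reassign
output = net(my_img_tensor)

Casting before the in-place *= (1/255) matters: on a ByteTensor that multiplication either truncates everything to 0 or raises a dtype error, depending on the PyTorch version. Also, tensor.to(device) returns a new tensor rather than modifying the original, so the result has to be assigned back.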

Source: PyTorch Discussion Forum

Thanks to AlbanD