Convert a Python list to a tensor in PyTorch
I want to convert a list of pixel values to a tensor, but I am getting an error. My code collects the pixel values (RGB) of every detected object in an image. How can we convert this list to a tensor?
My code:
cropped_images = []
imgs = PIL.Image.open(img_path).convert('RGB')
# print(img_path)
image_width, image_height = imgs.size
imgArrays = np.array(imgs)

# xCenter, yCenter, Width, Height are the normalized box coordinates of the detections
X = (xCenter * image_width)
Y = (yCenter * image_height)
W = (Width * image_width)
H = (Height * image_height)

cropped_image = np.zeros((image_height, image_width))
for i in range(len(X)):
    x1, y1, w, h = X[i], Y[i], W[i], H[i]
    x_start = int(x1 - (w / 2))
    y_start = int(y1 - (h / 2))
    x_end = int(x_start + w)
    y_end = int(y_start + h)
    # crop the detected object and turn the pixel block into a tensor
    temp = imgArrays[y_start:y_end, x_start:x_end]
    cropped_image_pixels = torch.as_tensor(temp)
    cropped_images.append(cropped_image_pixels)

stacked_tensor = torch.stack(cropped_images)
print(stacked_tensor)
Error:
RuntimeError Traceback (most recent call last)
<ipython-input-82-653a155c3b71> in <module>()
130
131 if __name__=="__main__":
--> 132 main()
2 frames
<ipython-input-80-670335a0656c> in __getitem__(self, idx)
76 cropped_image_pixels = torch.as_tensor(temp)
77 cropped_images.append(cropped_image_pixels)
---> 78 stacked_tensor = torch.stack(cropped_images)
79
80 print(stacked_tensor)
RuntimeError: stack expects each tensor to be equal size, but got [506, 343, 3] at entry 0 and [520, 334, 3] at entry 1
Your list contains two tensors, and they clearly have different sizes.
torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension.
All tensors need to be of the same size.
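For example (a minimal sketch using made-up shapes that mirror your error message):

import torch

a = torch.zeros(506, 343, 3)
b = torch.zeros(520, 334, 3)
# torch.stack([a, b])               # fails with the same RuntimeError: sizes differ
c = torch.zeros(506, 343, 3)
print(torch.stack([a, c]).shape)    # works: torch.Size([2, 506, 343, 3])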
So resize the images to a common size before stacking. You can use this pseudocode:
import cv2
import numpy as np
import torchvision.transforms as transforms

# ...

temp = []
for img_name in LIST:
    img = cv2.imread(img_name)            # load each image
    img = cv2.resize(img, (W, H))         # cv2.resize takes (width, height); forces a common size
    temp.append(img)
train_x = np.asarray(temp)

transform = transforms.Compose(
    [transforms.ToTensor()])              # HWC uint8 array -> CHW float tensor in [0, 1]
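Applied to your loop, the same idea means resizing every crop to one common size before torch.stack. A minimal sketch (the 224x224 target size is just an assumption; pick whatever your model expects):

import torch
import torch.nn.functional as F

target_h, target_w = 224, 224                     # assumed common crop size

cropped_images = []
for i in range(len(X)):
    # ... compute x_start, y_start, x_end, y_end as in your code ...
    temp = imgArrays[y_start:y_end, x_start:x_end]
    # HWC uint8 -> CHW float with a batch dim, so F.interpolate accepts it
    crop = torch.as_tensor(temp).permute(2, 0, 1).unsqueeze(0).float()
    crop = F.interpolate(crop, size=(target_h, target_w), mode='bilinear', align_corners=False)
    cropped_images.append(crop.squeeze(0))

stacked_tensor = torch.stack(cropped_images)      # every entry is now [3, 224, 224]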