PyTorch - How to use "toPILImage" correctly

I would like to know whether I am using torchvision's toPILImage correctly. I want to use it to see how the images look after the initial image transformations are applied to the dataset.

When I use it as in the code below, the image that comes up has strange colors, like this one. The original image is an ordinary RGB image.

This is my code:

import os
import torch
from PIL import Image, ImageFont, ImageDraw
import torch.utils.data as data
import torchvision
from torchvision import transforms    
import matplotlib.pyplot as plt

# Image transformations
normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],
    std=[0.229, 0.224, 0.225]
    )
transform_img = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    normalize ])

train_data = torchvision.datasets.ImageFolder(
    root='./train_cl/',
    transform=transform_img
    )
test_data = torchvision.datasets.ImageFolder(
    root='./test_named_cl/',
    transform=transform_img                                             
    )

train_data_loader = data.DataLoader(train_data,
    batch_size=4,
    shuffle=True,
    num_workers=4) #num_workers=args.nThreads)

test_data_loader = data.DataLoader(test_data,
    batch_size=32,
    shuffle=False,
    num_workers=4)        

# Open Image from dataset:
to_pil_image = transforms.ToPILImage()
my_img, _ = train_data[248]
results = to_pil_image(my_img)
results.show()

Edit:

I had to call .data on the Torch Variable to get the underlying tensor. I also needed to rescale the numpy array before transposing it. I found a solution that works, but it doesn't always work well. How can I do this better?

from torch.autograd import Variable
import numpy as np

# Grab one batch from the training loader
for i, data in enumerate(train_data_loader, 0):
    img, labels = data
    img = Variable(img)
    break

image = img.data.cpu().numpy()[0]

# This worked for rescaling:
image = (1/(2*2.25)) * image + 0.5

# Both of these didn't work:
# image /= (image.max()/255.0)
# image *= (255.0/image.max())

image = np.transpose(image, (1,2,0))
plt.imshow(image)
plt.show() 
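
As a sketch of an exact rescaling (my own illustration, assuming the same Normalize mean/std defined in the transform pipeline above), the normalization can be inverted per channel instead of approximated with a single constant:

# Sketch: undo transforms.Normalize exactly, per channel, using the
# mean/std from the transform pipeline above (assumed values).
import numpy as np
import torch

mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

my_img, _ = train_data[248]                        # normalized C x H x W tensor
unnormalized = my_img * std[:, None, None] + mean[:, None, None]
unnormalized = unnormalized.clamp(0, 1)            # guard against rounding spill

plt.imshow(np.transpose(unnormalized.numpy(), (1, 2, 0)))
plt.show()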

You can use a PIL image, but you're not actually loading the data as you normally would.

Try something like this instead:

import numpy as np
import matplotlib.pyplot as plt

for img,labels in train_data_loader:
    # load a batch from train data
    break

# this converts it from GPU to CPU and selects first image
img = img.cpu().numpy()[0]
#convert image back to Height,Width,Channels
img = np.transpose(img, (1,2,0))
#show the image
plt.imshow(img)
plt.show()  

As an update (02-10-2021):

import torch
import torchvision.transforms.functional as F
# load the image (creating a random image as an example)
img_data = torch.ByteTensor(4, 4, 3).random_(0, 255).numpy()
pil_image = F.to_pil_image(img_data)

Or

import torch
import torchvision.transforms as transforms
img_data = torch.ByteTensor(4, 4, 3).random_(0, 255).numpy()
pil_image = transforms.ToPILImage()(img_data)

The second form can be integrated with a dataset loader in pytorch, or it can be called directly; see the sketch below.
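
For example, a minimal sketch (my own illustration, not from the original answer) of both uses, assuming a float tensor with values in [0, 1]:

import torch
import torchvision.transforms as transforms

# Composed with other transforms, e.g. tensor -> PIL image -> resize:
to_pil_and_resize = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(128),
])
pil_resized = to_pil_and_resize(torch.rand(3, 64, 64))

# Or called directly on a single tensor:
pil_direct = transforms.ToPILImage()(torch.rand(3, 64, 64))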

I added a modified to_pil_image here;

essentially, it does what I suggested back in 2018, but it is now integrated into pytorch.

I would use something like this:

# Open Image from dataset:
my_img, _ = train_data[248]
results = transforms.ToPILImage()(my_img)
results.show()