How to train a Pytorch net
I am using this PyTorch implementation of SegNet along with pretrained values I found for object segmentation, and it works fine.
Now I want to resume training from the values I have, using a new dataset with similar images.
How can I do that?
I suppose I have to use the "train.py" file found in the repository, but I don't know what to write in order to replace the "fill the batch" comment.
Here is that part of the code:
def train(epoch):
    model.train()

    # update learning rate
    lr = args.lr * (0.1 ** (epoch // 30))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr

    # define a weighted loss (0 weight for 0 label)
    weights_list = [0] + [1 for i in range(17)]
    weights = np.asarray(weights_list)
    weigthtorch = torch.Tensor(weights_list)

    if (USE_CUDA):
        loss = nn.CrossEntropyLoss(weight=weigthtorch).cuda()
    else:
        loss = nn.CrossEntropyLoss(weight=weigthtorch)

    total_loss = 0

    # iteration over the batches
    batches = []
    for batch_idx, batch_files in enumerate(tqdm(batches)):

        # containers
        batch = np.zeros((args.batch_size, input_nbr, imsize, imsize), dtype=float)
        batch_labels = np.zeros((args.batch_size, imsize, imsize), dtype=int)

        # fill the batch
        # ...
        # What should I write here?

        batch_th = Variable(torch.Tensor(batch))
        target_th = Variable(torch.LongTensor(batch_labels))

        if USE_CUDA:
            batch_th = batch_th.cuda()
            target_th = target_th.cuda()

        # initilize gradients
        optimizer.zero_grad()

        # predictions
        output = model(batch_th)

        # Loss
        output = output.view(output.size(0), output.size(1), -1)
        output = torch.transpose(output, 1, 2).contiguous()
        output = output.view(-1, output.size(2))
        target = target.view(-1)
        l_ = loss(output.cuda(), target)
        total_loss += l_.cpu().data.numpy()
        l_.cuda()
        l_.backward()
        optimizer.step()

    return total_loss/len(files)
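Resuming from the weights you already have is, by itself, just a matter of loading them into the model before training continues. A minimal sketch, assuming the pretrained values are an ordinary PyTorch state dict saved with torch.save (the checkpoint file name below is made up):

import torch

# model and USE_CUDA are the ones defined in train.py; the file name is hypothetical
model.load_state_dict(torch.load('segnet_pretrained.pth'))
if USE_CUDA:
    model = model.cuda()
# training on the new dataset then continues from these weights,
# e.g. by calling train(epoch) inside the usual epoch loop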
If I had to guess, he probably made some Dataloader feeder class that extends the Pytorch Dataloader. See
https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
Near the bottom of the page you can see an example where they loop over the dataloader:
for i_batch, sample_batched in enumerate(dataloader):
For example, this is what it requires for images:
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize, shuffle=True, num_workers=2)

for batch_idx, (inputs, targets) in enumerate(trainloader):
    # Using the pytorch data loader the inputs and targets are given
    # automatically
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    inputs, targets = Variable(inputs), Variable(targets)
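Applied to the train() function above, that same pattern would replace the whole "fill the batch" section. A rough sketch of the batch loop, assuming a DataLoader named train_loader that yields (image, mask) pairs; neither the name nor the loader comes from the repository (one way to build such a loader is sketched after the next paragraph):

    # hypothetical body of the batch loop in train(); model, loss, optimizer,
    # USE_CUDA and total_loss are the ones defined in the original function
    for batch_idx, (batch_th, target_th) in enumerate(tqdm(train_loader)):
        if USE_CUDA:
            batch_th, target_th = batch_th.cuda(), target_th.cuda()
        optimizer.zero_grad()
        output = model(batch_th)
        # flatten predictions and labels for CrossEntropyLoss, as in the original code
        output = output.view(output.size(0), output.size(1), -1)
        output = torch.transpose(output, 1, 2).contiguous()
        output = output.view(-1, output.size(2))
        target = target_th.view(-1)
        l_ = loss(output, target)
        total_loss += l_.item()  # on PyTorch < 0.4 use l_.data[0] instead
        l_.backward()
        optimizer.step()
    return total_loss / len(train_loader)

The Variable() wrappers from the original code are only needed on PyTorch versions older than 0.4; on newer versions the tensors coming out of the DataLoader can be fed to the model directly.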
I don't know exactly how the author loads his files. However, you can follow the steps here: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html to make your own Dataloader.
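For what it's worth, a custom Dataset for paired images and segmentation masks might look roughly like the sketch below. The class name, the directory layout, and the assumption that masks are stored as single-channel images whose pixel values are class indices are all hypothetical, not taken from the repository:

import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class SegmentationDataset(Dataset):
    """Hypothetical dataset: images and masks share file names in two directories."""
    def __init__(self, image_dir, mask_dir, imsize):
        self.image_dir = image_dir
        self.mask_dir = mask_dir
        self.imsize = imsize
        self.names = sorted(os.listdir(image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert('RGB')
        mask = Image.open(os.path.join(self.mask_dir, name))
        image = image.resize((self.imsize, self.imsize))
        mask = mask.resize((self.imsize, self.imsize), Image.NEAREST)  # nearest keeps label values intact
        # CHW float tensor for the network, HW long tensor of class ids for the loss
        image = torch.from_numpy(np.array(image, dtype=np.float32).transpose(2, 0, 1) / 255.0)
        mask = torch.from_numpy(np.array(mask, dtype=np.int64))
        return image, mask

# directory names are placeholders; args.batch_size and imsize are the ones used in train.py
train_loader = DataLoader(SegmentationDataset('new_images/', 'new_masks/', imsize),
                          batch_size=args.batch_size, shuffle=True, num_workers=2)

The pairs returned here already have the shapes the train() loop expects (a float CHW image for the network and a long HW label map for the loss), so the loop sketched after the CIFAR10 example can consume this loader directly.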