How does it work for me to call an object in a function that doesn't have it as a parameter in Python/PyTorch?
I'm learning neural networks in PyTorch and I came across this:
import time
import numpy as np
import torch.nn as nn
from torch import optim

# Loss function
criterion = nn.MSELoss()

# Optimizer
optimizer = optim.Adam(MLP.parameters(), lr=args['lr'],
                       weight_decay=args['weight_decay'])

def train(train_loader, MLP, epoch):  # MLP is the model
    MLP.train()
    start = time.time()
    epoch_loss = []
    for batch in train_loader:
        sample, label = batch
        optimizer.zero_grad()
        # Forward
        pred = MLP(sample)
        loss = criterion(pred, label)
        epoch_loss.append(loss.data)
        # Backward
        loss.backward()
        optimizer.step()
    epoch_loss = np.asarray(epoch_loss)
    end = time.time()
    print('Epoch: {}, Loss: {:.4f} +/- {:.4f}, Time: {}'.format(
        epoch + 1, epoch_loss.mean(), epoch_loss.std(), end - start))
    return epoch_loss.mean()
Well, "criterion" and "optimizer" are objects that I did not pass as parameters to the function "train", the way I did with the model (MLP), yet it works. Does this work for any function, or is it a PyTorch thing?
This is not a PyTorch thing; these are called global (as opposed to local) variables. When Python does not find a name in a function's local scope, it looks it up in the enclosing and then the global (module-level) scope, which is how the body of train can see criterion and optimizer. If you want to master PyTorch, I suggest becoming more familiar with the Python language and programming in general.
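A minimal sketch of that name-lookup rule, with nothing PyTorch-specific involved (the names greeting and shout are made up for illustration):

```python
# A global (module-level) variable.
greeting = "hello"

def shout():
    # 'greeting' is not a parameter and is not assigned locally,
    # so Python falls back to the global scope to resolve it --
    # exactly like 'criterion' and 'optimizer' inside train().
    return greeting.upper()

print(shout())  # HELLO
```

Note that this only covers *reading* a global; assigning to a name inside a function makes it local unless you declare it with the global statement.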