torch.Linear weight doesn't update
# import blah blah
# activation function
Linear = torch.nn.Linear(6, 1)
sig = torch.nn.Sigmoid()
# optimizer
optim = torch.optim.SGD(Linear.parameters(), lr=0.001)
# input
# x => (891, 6)
# output
y = y.reshape(891, 1)
# cost function
loss_f = torch.nn.BCELoss()
for iter in range(10):
    for i in range(1000):
        optim.zero_grad()
        forward = sig(Linear(x)) > 0.5
        forward = forward.to(torch.float32)
        forward.requires_grad = True
        loss = loss_f(forward, y)
        loss.backward()
        optim.step()
In this code I want to update Linear.weight and Linear.bias, but it doesn't work. I think my code doesn't know what the weights and biases are, so I tried changing

optim = torch.optim.SGD(Linear.parameters(), lr=0.001)

to

optim = torch.optim.SGD([Linear.weight, Linear.bias], lr=0.001)

but it still doesn't work.

// I wanted to explain my problem in detail, but my English is poor, sorry
BCELoss is defined (per element, ignoring the optional weight) as

    l_n = -[ y_n * log(x_n) + (1 - y_n) * log(1 - x_n) ]

As you can see, the input x must be a probability, so your use of sig(Linear(x)) > 0.5 as the loss input is wrong. Moreover, sig(Linear(x)) > 0.5 returns a boolean tensor with no autograd history, which breaks the computation graph. You explicitly set requires_grad=True on it, but because the graph is already broken at the comparison, backpropagation cannot reach the linear layer, so its weights are never learned/changed.
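You can see the break directly by checking grad_fn on each intermediate tensor (a minimal sketch; the shapes here are illustrative):

```python
import torch

lin = torch.nn.Linear(6, 1)
x = torch.rand(4, 6)

probs = torch.sigmoid(lin(x))  # differentiable: carries a grad_fn
preds = probs > 0.5            # comparison: boolean tensor, no grad_fn

print(probs.grad_fn)  # a backward node, e.g. SigmoidBackward0
print(preds.grad_fn)  # None -> the graph stops here
```

Anything computed from preds can no longer propagate gradients back to lin's parameters, no matter what you set requires_grad to afterwards.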
Correct example usage:
import torch

Linear = torch.nn.Linear(6, 1)
sig = torch.nn.Sigmoid()
# optimizer
optim = torch.optim.SGD(Linear.parameters(), lr=0.001)
# sample data
x = torch.rand(891, 6)
y = torch.rand(891, 1)
loss_f = torch.nn.BCELoss()
for iter in range(10):
    optim.zero_grad()
    output = sig(Linear(x))   # probabilities, still in the graph
    loss = loss_f(output, y)
    loss.backward()
    optim.step()
    print(Linear.bias.item())
Output:
0.10717090964317322
0.10703673213720322
0.10690263658761978
0.10676861554384232
0.10663467645645142
0.10650081932544708
0.10636703670024872
0.10623333603143692
0.10609971731901169
0.10596618056297302
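If you still want hard 0/1 predictions, apply the threshold only for evaluation, outside the loss and outside the graph (a sketch; the random binary targets here are illustrative):

```python
import torch

lin = torch.nn.Linear(6, 1)
x = torch.rand(891, 6)
y = (torch.rand(891, 1) > 0.5).float()  # illustrative binary targets

# training would use sig(lin(x)) with BCELoss as above;
# thresholding happens only at evaluation time, under no_grad
with torch.no_grad():
    probs = torch.sigmoid(lin(x))
    preds = (probs > 0.5).float()
    accuracy = (preds == y).float().mean().item()
print(accuracy)
```

This keeps the loss differentiable while still giving you class labels for metrics like accuracy.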