PyTorch nn.Linear different output for same input

For learning purposes, I am trying to build a simple perceptron with PyTorch. It should not be trained, but simply produce the output for the weights I set. Here is the code:

import torch.nn
from torch import tensor

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3, 1)
        self.relu = torch.nn.ReLU()
        # force weights to equal one
        with torch.no_grad():
            self.fc1.weight = torch.nn.Parameter(torch.ones_like(self.fc1.weight))

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

net = Net()
test_tensor = tensor([1, 1, 1])
print(net(test_tensor.float()).item())

I expected this single-layer neural network to output 3. That is roughly (!) what it prints on each run, but the value ranges from about 2.5 to 3.5. Where does randomness enter the model?

It comes from the bias initialization. As the nn.Linear documentation states, the bias is not initialized to zero as you might expect; both weight and bias are sampled from U(-√k, √k) with k = 1/in_features. Your network therefore computes ReLU(1 + 1 + 1 + bias) = 3 + bias, and with in_features = 3 the bias lies in roughly (-0.577, 0.577), which matches the 2.5 to 3.5 range you observe.
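
A quick way to see this (a minimal sketch; the printed values are random and will differ on every run):

import torch

layer = torch.nn.Linear(3, 1)
print(layer.bias)                  # e.g. tensor([-0.4113], requires_grad=True)
print(torch.nn.Linear(3, 1).bias)  # a different random bias on every construction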

You can fix it like this:

import torch
from torch import nn

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(3, 1)
        self.relu = torch.nn.ReLU()
        # force the weights to one and the bias to zero
        with torch.no_grad():
            torch.nn.init.ones_(self.fc1.weight)
            torch.nn.init.zeros_(self.fc1.bias)

    def forward(self, x):
        x = self.fc1(x)
        output = self.relu(x)
        return output

x = torch.tensor([1., 1., 1.])
Net()(x)
# >>> tensor([3.], grad_fn=<ReluBackward0>)
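
If what you actually want is reproducibility rather than hard-coded parameters, seeding PyTorch's global RNG before construction works too; a minimal sketch:

import torch

torch.manual_seed(0)
a = torch.nn.Linear(3, 1)
torch.manual_seed(0)
b = torch.nn.Linear(3, 1)

# Both layers receive the same (still random) weight and bias,
# so they give identical outputs for the same input.
print(torch.equal(a.bias, b.bias))
# >>> True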