Simple Neural Network in PyTorch with 3 Inputs (Numerical Values)

I am having a hard time setting up a neural network; most of the examples out there are for images. My problem has 3 inputs, each of size N x M, where N is the number of samples and M is the number of features. I have a separate file (CSV) with a 1 x N binary target (0, 1).

The network I am trying to configure should have two hidden layers with 100 and 50 neurons respectively, sigmoid activation functions, and cross-entropy to check performance. The result should be just a single probability output.

Can anyone help?

Edit:

import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd 
import torch.nn.functional as F
#from torch.autograd import Variable
import pandas as pd

# Import Data
Input1 = pd.read_csv(r'...')
Input2 = pd.read_csv(r'...')
Input3 = pd.read_csv(r'...')
Target = pd.read_csv(r'...' )

# Convert to Tensor
Input1_tensor = torch.tensor(Input1.to_numpy()).float()
Input2_tensor = torch.tensor(Input2.to_numpy()).float()
Input3_tensor = torch.tensor(Input3.to_numpy()).float()
Target_tensor = torch.tensor(Target.to_numpy()).float()

# Transpose to have signal as columns instead of rows
input1 = Input1_tensor
input2 = Input2_tensor
input3 = Input3_tensor
y = Target_tensor

# Define the model
class Net(nn.Module):
    def __init__(self, num_inputs, hidden1_size, hidden2_size, num_classes):
        # Initialize super class
        super(Net, self).__init__()
        #self.criterion = nn.CrossEntropyLoss()
        
        # Add hidden layer 
        self.layer1 = nn.Linear(num_inputs,hidden1_size)
        # Activation
        self.sigmoid = torch.nn.Sigmoid()
        # Add output layer
        self.layer2 = nn.Linear(hidden1_size,hidden2_size)
        # Activation
        self.sigmoid2 = torch.nn.Sigmoid()
        self.layer3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x1, x2, x3):
        # implement the forward pass
     
        in1 = self.layer1(x1)
        in2 = self.layer1(x2)
        in3 = self.layer1(x3)
                      
        xyz = torch.cat((in1,in2,in3),1)

        return xyz

# Instantiate the model (sizes per the description above: 100 and 50 hidden neurons,
# one output; num_inputs is M, the number of features per input)
num_inputs = input1.shape[1]
model = Net(num_inputs, 100, 50, 1)

# Define loss function
loss_function = nn.CrossEntropyLoss()

# Define optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-4)

num_epochs = 100  # number of training passes over the data

for t in range(num_epochs):

    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(input1, input2, input3)

    # Compute and print loss
    loss = loss_function(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()

    # Calculate gradient using backward pass
    loss.backward()

    # Update model parameters (weights)
    optimizer.step()

Here I get the error message "RuntimeError: 0D or 1D target tensor expected, multi-target not supported"

at the line "loss = loss_function(y_pred, y)"

where y_pred is [20000, 375] and y is [20000, 1].
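The error comes from the target shape: nn.CrossEntropyLoss expects a 1D tensor of class indices, not an [N, 1] column. A minimal reproduction with standalone shapes (not the data above):

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.randn(8, 2)            # 8 samples, 2 classes (0/1)
target_1d = torch.zeros(8).long()     # shape [8]    -> accepted
target_2d = torch.zeros(8, 1).long()  # shape [8, 1] -> raises the RuntimeError

loss = loss_fn(logits, target_1d)     # works: a scalar loss
try:
    loss_fn(logits, target_2d)
except RuntimeError as e:
    print("RuntimeError:", e)
```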

You can refer to PyTorch, a Python library for deep learning and neural networks.

You can use the code below to define the network:

import torch
from torch import nn

class network(nn.Module):
    def __init__(self, M):
        # M is the dimension of the input features
        super(network, self).__init__()
        self.layer1 = nn.Linear(M, 100)
        self.layer2 = nn.Linear(100, 50)
        self.out = nn.Linear(50, 1)

    def forward(self, x):
        # Sigmoid activations after each hidden layer; the final
        # sigmoid produces a single probability output
        x = torch.sigmoid(self.layer1(x))
        x = torch.sigmoid(self.layer2(x))
        return torch.sigmoid(self.out(x))

----------


Then you can refer to the PyTorch documentation to complete the rest of the training code.
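Since the network ends in a sigmoid and outputs a single probability, nn.BCELoss is a matching criterion. A minimal training-loop sketch; the data, sizes, and epoch count below are placeholders, not the asker's CSVs:

```python
import torch
from torch import nn, optim

# Placeholder data: 16 samples, 10 features, binary targets as a column vector
X = torch.randn(16, 10)
y = torch.randint(0, 2, (16, 1)).float()

# Same architecture as above: 100 and 50 hidden units, single sigmoid output
model = nn.Sequential(
    nn.Linear(10, 100), nn.Sigmoid(),
    nn.Linear(100, 50), nn.Sigmoid(),
    nn.Linear(50, 1), nn.Sigmoid(),
)
criterion = nn.BCELoss()                 # binary cross-entropy on probabilities
optimizer = optim.SGD(model.parameters(), lr=1e-2)

for epoch in range(50):
    optimizer.zero_grad()
    y_pred = model(X)                    # shape [16, 1], probabilities in (0, 1)
    loss = criterion(y_pred, y)          # BCELoss wants y_pred and y the same shape
    loss.backward()
    optimizer.step()
```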

Edit:

As for the RuntimeError: you can squeeze the target tensor with y.squeeze(). This removes the redundant dimension of size 1, e.g. [20000, 1] -> [20000]. (Note that nn.CrossEntropyLoss also expects the target to contain integer class indices, i.e. a long tensor, not floats.)
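For instance, a quick shape check:

```python
import torch

y = torch.zeros(20000, 1)
print(y.shape)            # torch.Size([20000, 1])
print(y.squeeze().shape)  # torch.Size([20000])
```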