Pytorch nll_loss returning a constant loss during training loop

I have a binary image classification problem: I want to classify whether an image is of an ant or a bee. I have scraped the images and done all the cleaning, reshaping, and conversion to grayscale; each image is 200x200 with a single grayscale channel. Before jumping to Conv Nets, I first wanted to solve this problem with a feed-forward NN.

The problem I run into in the training loop is that I get a constant loss. I am using the Adam optimizer, F.log_softmax for the last layer of the network, and the nll_loss function. My code so far looks like this:

FF - Network

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 2)
        
    def forward(self, X):
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = F.relu(self.fc3(X))
        X = F.log_softmax(self.fc4(X), dim=1)
        return X
    
net = Net()

Training loop:

optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
EPOCHS = 10
BATCH_SIZE = 5
for epoch in range(EPOCHS):
    print(f'Epochs: {epoch+1}/{EPOCHS}')
    for i in range(0, len(y_train), BATCH_SIZE):
        X_batch = X_train[i: i+BATCH_SIZE].view(-1,200 * 200)
        y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor)
        
        net.zero_grad() ## or you can say optimizer.zero_grad()
        
        outputs = net(X_batch)
        loss = F.nll_loss(outputs, y_batch)
        loss.backward()
        optimizer.step()
    print("Loss", loss)

I am suspecting that the problem may be with my batching and the loss function. I would appreciate any help. Note: the images are grayscale images of shape (200, 200).
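Since the batching above is done with manual slicing, one way to rule that part out is to let a DataLoader build the batches instead. This is only a sketch, assuming X_train and y_train are the same tensors used above and that net, optimizer and EPOCHS stay as defined:

from torch.utils.data import TensorDataset, DataLoader

# Flatten each 200x200 image and pair it with its integer label (names assumed from the code above)
train_ds = TensorDataset(X_train.view(-1, 200 * 200), y_train.type(torch.LongTensor))
train_loader = DataLoader(train_ds, batch_size=5, shuffle=True)

for epoch in range(EPOCHS):
    for X_batch, y_batch in train_loader:
        optimizer.zero_grad()
        outputs = net(X_batch)
        loss = F.nll_loss(outputs, y_batch)
        loss.backward()
        optimizer.step()
    print("Loss", loss.item())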

I kept waiting for an answer, but I can't even comment yet. I figured out the solution myself; maybe this will help someone in the future.

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(200 * 200, 64) # in_features = 200 * 200, the flattened size of a 200x200 grayscale image
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 2)
        
    def forward(self, X):
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        X = F.relu(self.fc3(X))
        X = self.fc4(X) # I removed the activation function here; the layer now returns raw logits
        return X
    
net = Net()

# I changed the loss function to CrossEntropyLoss() since I didn't apply an activation function on the last layer

loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)

EPOCHS = 10
BATCH_SIZE = 5
for epoch in range(EPOCHS):
    print(f'Epochs: {epoch+1}/{EPOCHS}')
    for i in range(0, len(y_train), BATCH_SIZE):
        X_batch = X_train[i: i+BATCH_SIZE].view(-1, 200 * 200)
        y_batch = y_train[i: i+BATCH_SIZE].type(torch.LongTensor)
        
        net.zero_grad() ## or you can say optimizer.zero_grad()
        
        outputs = net(X_batch)
        loss = loss_function(outputs, y_batch)
        loss.backward()
        optimizer.step()
    print("Loss", loss)