RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120)
I am building the layers of a neural network for a skin-detection dataset and get an error here. I know I have made a mistake somewhere but can't figure it out. With input images of size 224*224 and 3 channels, I get the error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x73034 and 200x120)
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 26, 5)
        self.fc1 = nn.Linear(8 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 86)
        self.fc3 = nn.Linear(86, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().to(device)
print(net)
These are the layers and the network module.
<ipython-input-41-8c9bafb31c44> in forward(self, x)
16 x = self.pool(F.relu(self.conv2(x)))
17 x = torch.flatten(x,1)
---> 18 x = F.relu(self.fc1(x))
19 x = F.relu(self.fc2(x))
20 x = self.fc3(x)
Can anyone help me fix this?
The output of torch.flatten does not match the input of self.fc1. Print the shape of the flattened tensor:

x = torch.flatten(x, 1)
print(x.size())

and then update the definition of self.fc1 to match:

self.fc1 = nn.Linear(8 * 5 * 5, 120)
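If you would rather not compute the flattened size by hand, recent PyTorch versions (1.8+) provide nn.LazyLinear, which infers in_features from the first batch it sees. A minimal sketch of that alternative, assuming the same layer sizes as in the question:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 26, 5)
        self.fc1 = nn.LazyLinear(120)  # in_features inferred on first forward pass
        self.fc2 = nn.Linear(120, 86)
        self.fc3 = nn.Linear(86, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()
net(torch.randn(4, 3, 224, 224))  # first call materializes fc1
print(net.fc1.in_features)
```

After the first forward pass, fc1 is materialized with in_features = 73034.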
As Anant said, you need the flattened conv2 dimension (73034) as the input dimension of the fc1 layer:

self.fc1 = nn.Linear(73034, 120)
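A quick way to confirm the fix is to push a dummy batch through the corrected model. This sketch drops the `.to(device)` call from the question so it runs on CPU; everything else matches the layers above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 26, 5)
        # 26 channels * 53 * 53 spatial positions = 73034 flattened features
        self.fc1 = nn.Linear(26 * 53 * 53, 120)
        self.fc2 = nn.Linear(120, 86)
        self.fc3 = nn.Linear(86, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = Net()
out = net(torch.randn(4, 3, 224, 224))  # batch of 4 RGB 224x224 images
print(out.shape)
```

With the corrected fc1, the forward pass completes and the output shape is (4, 2): batch size 4, two classes.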
Formula for the output size of each conv layer:

[(height or width) - kernel_size + 2*padding] / stride + 1

For the following I will use dimensions in (channels, height, width) order:

input (3, 224, 224) -> conv1 -> (16, 220, 220) -> pool -> (16, 110, 110) -> conv2 -> (26, 106, 106) -> pool -> (26, 53, 53) -> flatten -> (73034)

Your batch size appears to be 4, which is the "4" in (4x73034). If you print the output size of the conv1 or conv2 layer, the format will be (Batch, Channels, Height, Width).
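The size arithmetic above can be checked with a few lines of plain Python. The helpers `conv_out` and `pool_out` below are hypothetical names, not part of PyTorch; they just implement the formula for a stride-1 conv and a 2x2 max pool:

```python
def conv_out(size, kernel, padding=0, stride=1):
    # (size - kernel + 2*padding) / stride + 1, with integer division
    return (size - kernel + 2 * padding) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # MaxPool2d(2, 2) follows the same formula with kernel = stride = 2
    return (size - kernel) // stride + 1

h = 224
h = conv_out(h, 5)   # conv1: 224 - 5 + 1 = 220
h = pool_out(h)      # pool:  110
h = conv_out(h, 5)   # conv2: 106
h = pool_out(h)      # pool:  53
print(26 * h * h)    # 26 channels * 53 * 53 = 73034
```

Running this prints 73034, the flattened feature count that fc1 must accept.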