How can I convert the dimensions in the model from 2D to 1D?
I am a beginner with PyTorch. I want to classify 2D binary arrays of size (17 × 20) into 8 classes, using cross entropy as the loss function. My batch size is 512: the input is a batch of 512 arrays of size (17 × 20), and the final output should be a batch of 512 vectors of length 8, i.e. shape [512, 8]. With the model below, however, I get an output of shape [512, 680, 8] (the sizes I printed inside the model are shown after the code). How can I get [512, 8] as the final output of this network?
import torch
import torch.nn as nn

class PPS(nn.Module):
    def __init__(self, M=1):
        super(PPS, self).__init__()
        # input layer: 1x1 convolution over 17 input channels -> 680 channels
        self.layer1 = nn.Sequential(
            nn.Conv2d(17, 680, kernel_size=1, stride=1, padding=0),
            nn.ReLU())
        self.drop1 = nn.Sequential(nn.Dropout())
        self.batch1 = nn.BatchNorm2d(680)
        self.lstm1 = nn.Sequential(nn.LSTM(
            input_size=20,
            hidden_size=16,
            num_layers=1,
            bidirectional=True,
            batch_first=True))
        self.gru = nn.Sequential(nn.GRU(
            input_size=16 * 2,
            hidden_size=16,
            num_layers=2,
            bidirectional=True,
            batch_first=True))
        self.fc1 = nn.Linear(16 * 2, 8)

    def forward(self, x):
        out = self.layer1(x)
        out = self.drop1(out)
        out = self.batch1(out)
        out = out.squeeze()       # drop the trailing width-1 dimension
        out, _ = self.lstm1(out)
        out, _ = self.gru(out)
        out = self.fc1(out)
        return out
cov2d torch.Size([512, 680, 20, 1])
drop torch.Size([512, 680, 20, 1])
batch torch.Size([512, 680, 20])
lstm1 torch.Size([512, 680, 32])
lstm2 torch.Size([512, 680, 32])
linear1 torch.Size([512, 680, 8])
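As a sanity check (my own sketch, not part of the original question): feeding a dummy batch shaped (512, 17, 20, 1), so that the 17 rows act as the Conv2d input channels, reproduces the reported final shape.

import torch

# Hypothetical dummy input: (batch, channels, height, width) = (512, 17, 20, 1)
model = PPS()
x = torch.zeros(512, 17, 20, 1)
print(model(x).shape)  # torch.Size([512, 680, 8])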
If you want the output to be (512, 8), then you have to change the last linear layer as shown below. (The reason your version keeps the 680 dimension is that nn.Linear acts only on the last dimension, mapping each of the 680 positions to 8 features independently; flattening the 680 × 32 features before the linear layer removes it.)
def __init__(self, M=1):
    ...
    self.gru = nn.Sequential(nn.GRU(
        input_size=16 * 2,
        hidden_size=16,
        num_layers=2,
        bidirectional=True,
        batch_first=True))
    self.fc1 = nn.Linear(680 * 16 * 2, 8)

def forward(self, x):
    ...
    out, _ = self.gru(out)
    out = self.fc1(out.reshape(-1, 680 * 16 * 2))
    return out
The goal is to reduce the number of features from 680 * 16 * 2 down to 8. You can (and probably should) add more final linear layers to carry out this reduction in stages, as in the sketch below.
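A minimal sketch of such a staged reduction (the hidden width of 512 and the single extra layer are my own illustrative choices, not from the answer):

import torch
import torch.nn as nn

# Sketch: reduce the 680 * 32 = 21760 flattened features to 8 in two stages.
# The hidden width (512) is an arbitrary illustrative choice.
head = nn.Sequential(
    nn.Linear(680 * 16 * 2, 512),
    nn.ReLU(),
    nn.Dropout(),
    nn.Linear(512, 8))

out = torch.zeros(512, 680, 16 * 2)        # stand-in for the GRU output
logits = head(out.reshape(out.size(0), -1))
print(logits.shape)                        # torch.Size([512, 8])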