Get matrix dimensions from pytorch layers
Here is an autoencoder I created based on the Pytorch tutorials:
import torch
import torchvision
import torch.nn as nn
import torch.utils.data as data_utils
from torch.autograd import Variable
import numpy as np
import pandas as pd
import datetime as dt
from pylab import plt
plt.style.use('seaborn')

cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor

epochs = 1000
features = torch.tensor(np.array([ [1,2,3],[1,2,3],[100,200,500] ]))
print(features)

batch = 10
data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False)

encoder = nn.Sequential(nn.Linear(3, batch), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(batch, 3), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001)

encoded_images = []
for i in range(epochs):
    for j, images in enumerate(data_loader):
        # images = images.view(images.size(0), -1)
        images = Variable(images).type(FloatTensor)
        optimizer.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
        # encoded_images.append(encoder(images))
# print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor)))

encoded_images = []
for j, images in enumerate(data_loader):
    images = images.view(images.size(0), -1)
    images = Variable(images).type(FloatTensor)
    encoded_images.append(encoder(images))
I can see that the encoded images do have the newly created 10 dimensions. To understand the matrix operations going on behind the scenes, I tried to print the matrix dimensions of encoder and decoder, but shape is not available on nn.Sequential.
How can I print the matrix dimensions of an nn.Sequential?
An nn.Sequential is not a "layer", but rather a "container". It can store several layers and manages their execution (plus some other functionality).
In your case, each nn.Sequential holds both the linear layer and the non-linear nn.Sigmoid activation. To get the shape of the weights of the first layer inside an nn.Sequential, you can simply do:
encoder[0].weight.shape
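For completeness, a small sketch (using the same 3 → 10 → 3 architecture as the question) showing both indexing into the container and iterating over every parameter tensor in the whole model with named_parameters():

```python
import torch
import torch.nn as nn

# Same architecture as in the question: 3 -> 10 -> 3
encoder = nn.Sequential(nn.Linear(3, 10), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(10, 3), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)

# Index into the container to reach an individual layer's weights
print(encoder[0].weight.shape)  # torch.Size([10, 3])
print(encoder[0].bias.shape)    # torch.Size([10])

# Or iterate over all parameter tensors in the full model;
# the names encode the nesting, e.g. "0.0.weight" is the
# first layer of the first sub-Sequential (the encoder)
for name, param in autoencoder.named_parameters():
    print(name, tuple(param.shape))
```

Note that nn.Linear stores its weight as (out_features, in_features), so the encoder's weight is 10×3 even though it maps 3 inputs to 10 outputs.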