RuntimeError: size mismatch, m1: [4 x 784], m2: [4 x 784] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:136

I ran the following code:

import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils import data as t_data
import torchvision.datasets as datasets
from torchvision import transforms

data_transforms = transforms.Compose([transforms.ToTensor()])
mnist_trainset = datasets.MNIST(root='./data', train=True,
                                download=True, transform=data_transforms)

batch_size = 4
dataloader_mnist_train = t_data.DataLoader(mnist_trainset,
                                           batch_size=batch_size,
                                           shuffle=True)

def make_some_noise():
    return torch.rand(batch_size,100)


class generator(nn.Module):

    def __init__(self, inp, out):

        super(generator, self).__init__()

        self.net = nn.Sequential(
                                 nn.Linear(inp,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,1000),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(1000,800),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(800,out)
                                    )

    def forward(self, x):
        x = self.net(x)
        return x

class discriminator(nn.Module):

    def __init__(self, inp, out):

        super(discriminator, self).__init__()

        self.net = nn.Sequential(
                                 nn.Linear(inp,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,784),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(784,200),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(200,out),
                                 nn.Sigmoid()
                                    )

    def forward(self, x):
        x = self.net(x)
        return x

def plot_img(array,number=None):
    array = array.detach()
    array = array.reshape(28,28)

    plt.imshow(array,cmap='binary')
    plt.xticks([])
    plt.yticks([])
    if number:
        plt.xlabel(number,fontsize='x-large')
    plt.show()

d_steps = 100
g_steps = 100

gen=generator(4,4)
dis=discriminator(4,4)

criteriond1 = nn.BCELoss()
optimizerd1 = optim.SGD(dis.parameters(), lr=0.001, momentum=0.9)

criteriond2 = nn.BCELoss()
optimizerd2 = optim.SGD(gen.parameters(), lr=0.001, momentum=0.9)

printing_steps = 20

epochs = 5

for epoch in range(epochs):

    print (epoch)

    # training discriminator
    for d_step in range(d_steps):
        dis.zero_grad()

        # training discriminator on real data
        for inp_real,_ in dataloader_mnist_train:
            inp_real_x = inp_real
            break

        inp_real_x = inp_real_x.reshape(batch_size,784)
        dis_real_out = dis(inp_real_x)
        dis_real_loss = criteriond1(dis_real_out,
                              Variable(torch.ones(batch_size,1)))
        dis_real_loss.backward()

        # training discriminator on data produced by generator
        inp_fake_x_gen = make_some_noise()
        #output from generator is generated        
        dis_inp_fake_x = gen(inp_fake_x_gen).detach()
        dis_fake_out = dis(dis_inp_fake_x)
        dis_fake_loss = criteriond1(dis_fake_out,
                                Variable(torch.zeros(batch_size,1)))
        dis_fake_loss.backward()

        optimizerd1.step()



    # training generator
    for g_step in range(g_steps):
        gen.zero_grad()

        #generating data for input for generator
        gen_inp = make_some_noise()

        gen_out = gen(gen_inp)
        dis_out_gen_training = dis(gen_out)
        gen_loss = criteriond2(dis_out_gen_training,
                               Variable(torch.ones(batch_size,1)))
        gen_loss.backward()

        optimizerd2.step()

    if epoch%printing_steps==0:
        plot_img(gen_out[0])
        plot_img(gen_out[1])
        plot_img(gen_out[2])
        plot_img(gen_out[3])
        print("\n\n")

When I run the code, the following error is shown:

  File "mygan.py", line 105, in <module>
    dis_real_out = dis(inp_real_x)
RuntimeError: size mismatch, m1: [4 x 784], m2: [4 x 784] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:136

How do I fix this?

I got the code from https://blog.usejournal.com/train-your-first-gan-model-from-scratch-using-pytorch-9b72987fd2c0

The error says that the tensor you fed into the discriminator has the wrong shape. Let's figure out what shape the tensor actually has, and what shape is expected.

Because of the reshape above, the tensor itself has shape [batch_size x 784]. The discriminator network, however, expects a tensor whose last dimension is 4. That is because the first layer in the discriminator network is nn.Linear(inp, 784), where inp = 4.

A linear layer nn.Linear(input_size, output_size) expects the final dimension of its input tensor to equal input_size, and produces an output whose final dimension is projected to output_size. In this case, it expects an input tensor of shape [batch_size x 4] and outputs a tensor of shape [batch_size x 784].
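To make that shape rule concrete, here is a minimal, framework-free sketch; the helper linear_output_shape is a hypothetical name written only for illustration, mimicking how nn.Linear validates shapes:

```python
def linear_output_shape(input_shape, in_features, out_features):
    """Mimic nn.Linear's shape rule: the last dimension of the input
    must equal in_features, and is projected to out_features."""
    if input_shape[-1] != in_features:
        raise ValueError(
            f"size mismatch: last dim is {input_shape[-1]}, expected {in_features}"
        )
    return input_shape[:-1] + (out_features,)

# The discriminator's first layer is nn.Linear(4, 784), but the real
# batch was reshaped to (4, 784): last dim 784 != 4, hence the error.
try:
    linear_output_shape((4, 784), 4, 784)
except ValueError as e:
    print(e)

# With in_features = 784 the same input passes through fine:
print(linear_output_shape((4, 784), 784, 1))  # -> (4, 1)
```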

Now the real problem: the generator and discriminator you defined have the wrong sizes. You seem to have changed the 300 from the blog post to 784, which I assume is your image size (28 x 28 for MNIST). However, 300 is not the input size but the "hidden state size" -- the model uses a 300-dimensional vector to encode the input image.

What you should do here is set the discriminator's input size to 784 and its output size to 1, because the discriminator makes a binary judgment: fake (0) or real (1). For the generator, the input size should equal the size of the randomly generated "input noise", which is 100 in this case. Its output size should also be 784, since its output is the generated image, which should be the same size as the real data.

So you only need to change your code as follows, and it should run smoothly:

gen = generator(100, 784)
dis = discriminator(784, 1)