ValueError: Data cardinality is ambiguous: x sizes: 10000 y sizes: 60000 on mnist dataset

I'm trying to train on the MNIST dataset with TensorFlow, but I get an error about data cardinality. I printed the shapes of trainX, trainY, testX, and testY and everything looks fine, yet it still raises this error. I've seen similar questions asked before, but I couldn't follow the answers.

Shapes of trainX, trainY, testX, testY:

(60000, 28, 28)
(60000,)
(10000, 28, 28)
(10000,)

My code:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
    Conv2D, Activation, MaxPooling2D, Flatten, Dense
)
from tensorflow.keras import backend as K
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from tensorflow.keras.datasets import mnist
import numpy as np


class LeNet:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        inputShape = (width, height, depth)

        if K.image_data_format() == "channels_first":
            inputShape = (depth, width, height)

        # first set of CONV
        model.add(Conv2D(20, (5, 5), strides=(1, 1),
                padding="same", input_shape=inputShape))
        model.add(Activation('relu'))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # second set of CONV
        model.add(Conv2D(50, (5, 5), strides=(1, 1), padding="same"))
        model.add(Activation('relu'))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # Fully connected layer
        model.add(Flatten())
        model.add(Dense(500))
        model.add(Activation('relu'))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation('softmax'))

        return model

# load mnist dataset
print("[INFO] loading mnist dataset ...")
(trainX, trainY), (testX, testY) = mnist.load_data()

print(trainX.shape)
print(trainY.shape)
print(testX.shape)
print(testY.shape)

# scale to range [0,1]
trainX = testX.astype("float") / 255.0
testX = testX.astype("float") / 255.0

label = LabelBinarizer()
trainY = label.fit_transform(trainY)
testY = label.transform(testY)

# initialize the label name for mnist dataset
labelNames = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]

# initialize the optimizer and model
print("[INFO] compiling model ...")
model = LeNet.build(width=28, height=28, depth=1, classes=10)
model.compile(optimizer='adam', loss="categorical_crossentropy", 
            metrics=["accuracy"])

# train the network
print("[INFO] training the network ...")
H = model.fit(trainX, trainY, validation_data=(testX, testY), 
            batch_size=128, epochs=20, verbose=1)

# save the model
print("[INFO] saving the model ...")
model.save("leNet_MNIST.h5")

# evaluating the network
print("[INFO] evaluating the network ...")
predictions = model.predict(testX, batch_size=128)
print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1),
    target_names=labelNames))

I get this error:

ValueError                                Traceback (most recent call last)
<ipython-input-5-20120316873b> in <module>()
     71 print("[INFO] training the network ...")
     72 H = model.fit(trainX, trainY, validation_data=(testX, testY), 
---> 73             batch_size=128, epochs=20, verbose=1)
     74 
     75 # save the model

3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, sample_weight_modes, batch_size, epochs, steps, shuffle, **kwargs)
    280             label, ", ".join(str(i.shape[0]) for i in nest.flatten(data)))
    281       msg += "Please provide data which shares the same first dimension."
--> 282       raise ValueError(msg)
    283     num_samples = num_samples.pop()
    284 

ValueError: Data cardinality is ambiguous:
  x sizes: 10000
  y sizes: 60000
Please provide data which shares the same first dimension.

There is a small mistake in your code. In the lines where you scale the data,

# scale to range [0,1]
trainX = testX.astype("float") / 255.0
testX = testX.astype("float") / 255.0

you accidentally used testX for both trainX and testX, so trainX.shape becomes (10000, 28, 28) while trainY still holds 60000 labels, which is exactly the mismatch reported in the error. Just change it to

trainX = trainX.astype("float") / 255.0

and it should work.
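
For reference, here is a minimal sketch of the corrected scaling block with an added shape check; the assert lines are only illustrative and assume the variable names from your script:

# scale to range [0,1], each array scaled from its own data
trainX = trainX.astype("float") / 255.0
testX = testX.astype("float") / 255.0

# sanity check: features and labels must share the same first dimension
assert trainX.shape[0] == trainY.shape[0]  # 60000 samples
assert testX.shape[0] == testY.shape[0]    # 10000 samples

If the asserts pass, model.fit will no longer complain about ambiguous data cardinality, since x and y now agree on the number of samples.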