ValueError: Missing data for input "input_2". You passed a data dictionary with keys ['y', 'x']. Expected the following keys: ['input_2']

Following the earlier code, I am evaluating a federated learning model and have run into a few problems. Here is the evaluation code:

central_test = test.create_tf_dataset_from_all_clients()
test_data = central_test.map(reshape_data)

# Function that accepts a server state and uses
# Keras to evaluate on the test dataset.
def evaluate(server_state):
  keras_model = create_keras_model()
  keras_model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]  
  )
  keras_model.set_weights(server_state)
  keras_model.evaluate(central_test)

server_state = federated_algorithm.initialize()
evaluate(server_state)

This is the error message:

ValueError: Missing data for input "input_2". You passed a data dictionary with keys ['y', 'x']. Expected the following keys: ['input_2']

So what is going wrong here? Is create_tf_dataset_from_all_clients being used correctly? As the tutorial says, it is meant to "create a centralized evaluation dataset". Why do we need to use a centralized dataset at all?
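
For reference, each element of the dataset is a dict rather than a (features, labels) pair; printing the element spec shows roughly the following (exact shapes depend on reshape_data):

print(test_data.element_spec)
# Prints an OrderedDict with 'x' and 'y' entries, e.g.
# OrderedDict([('x', TensorSpec(...)), ('y', TensorSpec(...))])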

The test dataset has a different format during evaluation: the pooled client data yields OrderedDicts with keys 'x' and 'y', while keras_model.evaluate expects either (features, labels) tuples or a dict whose keys match the model's input layer names (here "input_2"), which is why Keras reports missing data for "input_2". Map the elements to tuples (and batch the dataset) before evaluating. Try:

# Pool all client data, apply reshape_data, and batch for centralized evaluation.
test_data = test.create_tf_dataset_from_all_clients().map(reshape_data).batch(2)
# Convert each dict element into the (features, labels) tuple that Keras expects.
test_data = test_data.map(lambda x: (x['x'], x['y']))

def evaluate(server_state):
  keras_model = create_keras_model()
  keras_model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]  
  )
  keras_model.set_weights(server_state)
  keras_model.evaluate(test_data)

server_state = federated_algorithm.initialize()
evaluate(server_state)
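
As for the second question: create_tf_dataset_from_all_clients is being used correctly here. It simply concatenates the examples of all clients into one ordinary tf.data.Dataset, which is exactly what a centralized (non-federated) Keras evaluation of the final server model needs. You can also sanity-check that the mapping produced what Keras expects (a sketch; the exact shapes depend on reshape_data):

# Before the tuple map: elements are dicts with 'x' and 'y' keys, which Keras
# tries (and fails) to match against the input layer name "input_2".
print(test.create_tf_dataset_from_all_clients().map(reshape_data).element_spec)

# After the tuple map and batching: elements are (features, labels) pairs,
# which keras_model.evaluate accepts directly.
print(test_data.element_spec)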