Trouble with training simple policy agent. Error: Cannot find a connection between any variable and the result of the loss function y=f(x)

I am trying to create a policy network agent to play tic-tac-toe using tensorflow.js and Node.

When I run my training step at the end of a game, I get the following error:

Error: Cannot find a connection between any variable and the result of the loss function y=f(x). Please make sure the operations that use variables are inside the function f passed to minimize().

const tf = require('@tensorflow/tfjs-node')

const BOARD_SIZE = 9 // assumed: one output per cell of the tic-tac-toe board

class NNModel {
  constructor(learning_rate = 0.01){
    this.learning_rate = learning_rate
    this.model = this.createModel()
  }

  train(actions, rewards, boards) {

    const optimizer = tf.train.rmsprop(this.learning_rate, 0.99)

    optimizer.minimize(() => {
      // The .dataSync() calls below turned out to be the bug -- see the update at the end.
      const oneHotLabels = tf.oneHot(actions, BOARD_SIZE).dataSync()
      const logits = this.model.predict(tf.tensor(boards)).dataSync()
      const crossEntropies = tf.losses.softmaxCrossEntropy(oneHotLabels, logits).asScalar()
      const loss = tf.tensor(rewards).mul(crossEntropies)
      return loss
    })
  }

  createModel() {
    const model = tf.sequential()

    model.add(
      tf.layers.dense({
        units: BOARD_SIZE * 3 * 9,
        activation: 'relu',
        inputShape: [BOARD_SIZE * 3]
      })
    )

    model.add(
      tf.layers.dense({
        units: BOARD_SIZE,
      })  
    )

    return model
  }
}

In my SimplePolicyAgent, as part of each move step I save the board state to a log, use the model to choose a move, and save the chosen move to the log.
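For context, here is a minimal sketch of that move step. The class shape, the move method, and the sampling approach are illustrative assumptions, not my exact code:

class SimplePolicyAgent {
  constructor(nnModel) {
    this.nnModel = nnModel
    this.boardLog = []  // board states seen this game
    this.actionLog = [] // moves chosen this game
  }

  // Choose a move for the current board and record both in the logs.
  move(board) {
    const action = tf.tidy(() => {
      const logits = this.nnModel.model.predict(tf.tensor([board]))
      // Sample from the policy so the agent explores rather than always
      // taking the argmax. dataSync() is fine here because nothing in
      // this step needs to be differentiated.
      return tf.multinomial(logits, 1).dataSync()[0]
    })
    this.boardLog.push(board)
    this.actionLog.push(action)
    return action
  }
}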

At the end of the game, I take the result and build a list of rewards the same length as the move log, based on the game's outcome.

Then I call the train function with the actions, rewards, and boards, roughly as sketched below.
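A sketch of that end-of-game step; the endGame method and the +1/-1/0 reward values are illustrative assumptions:

// In SimplePolicyAgent: called once the game is over.
endGame(result) {
  // Every move in the episode gets the same reward based on the outcome.
  const rewardValue = result === 'win' ? 1 : result === 'loss' ? -1 : 0
  const rewards = new Array(this.actionLog.length).fill(rewardValue)

  this.nnModel.train(this.actionLog, rewards, this.boardLog)

  // Reset the logs for the next game.
  this.boardLog = []
  this.actionLog = []
}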

I expected this step to update the model weights so the model becomes more likely to choose winning moves for a given board state.

I am trying to mimic the following Python implementation:

# loss
cross_entropies = tf.losses.softmax_cross_entropy(one_hot_labels=tf.one_hot(actions, 7), logits=Ylogits)
loss = tf.reduce_sum(rewards * cross_entropies)

# training op
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001, decay=0.99)
train_op = optimizer.minimize(loss)

Thanks for reading my question.

Update: the following version of the train function runs without the error:

  train(actions, rewards, boards) {
    const optimizer = tf.train.rmsprop(this.learning_rate, 0.99)
    return optimizer.minimize(() => {
      // Everything stays a tensor here, so minimize() can trace the loss
      // back to the model's variables.
      const oneHotLabels = tf.oneHot(actions, BOARD_SIZE)
      const logits = this.model.predict(tf.tensor(boards))
      const crossEntropies = tf.losses.softmaxCrossEntropy(oneHotLabels, logits)
      // Weight by the rewards and reduce to a scalar loss.
      const loss = tf.sum(tf.tensor(rewards).mul(crossEntropies)).asScalar()
      return loss
    })
  }

This code now runs without the error. My mistake was calling .dataSync() on oneHotLabels and logits, which hid the model's variables from the minimize function.
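For anyone hitting the same error: dataSync() copies a tensor's values out into a plain TypedArray, so anything built from that array is cut off from the model's variables and minimize() has no gradient path to follow. A minimal illustration:

const logitsTensor = this.model.predict(tf.tensor(boards)) // a tensor, still connected to the model's variables
const logitsArray = logitsTensor.dataSync()                // a plain Float32Array, connected to nothing
// A loss computed from logitsArray (even after wrapping it back up with
// tf.tensor(logitsArray)) gives minimize() nothing to differentiate.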