Retrain mxnet model in R

I built a neural network with mxnet. Now I want to train this model iteratively on new data points: after I simulate a new data point, I want to perform a fresh gradient-descent update on the existing model. I do not want to save the model to an external file and load it back in.

I wrote the code below, but the weights do not change after the second training step. I also get NaN as the training error.

library(mxnet)
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, num_hidden = 2, no.bias = TRUE)
lro <- mx.symbol.LinearRegressionOutput(fc1)

# first data observation
train.x = matrix(0, ncol = 3)
train.y = matrix(0, nrow = 2)

# first training step
model = mx.model.FeedForward.create(lro,
  X = train.x, y = train.y, initializer = mx.init.uniform(0.001),
  num.round = 1, array.batch.size = 1, array.layout = "rowmajor",
  learning.rate = 0.1, eval.metric = mx.metric.mae)
print(model$arg.params)

# second data observation
train.x = matrix(0, ncol = 3)
train.x[1] = 1
train.y = matrix(0, nrow = 2)
train.y[1] = -33

# retrain model on new data
# pass on params of old model
model = mx.model.FeedForward.create(symbol = model$symbol,
  arg.params = model$arg.params, aux.params = model$aux.params,
  X = train.x, y = train.y, num.round = 1,
  array.batch.size = 1, array.layout = "rowmajor",
  learning.rate = 0.1, eval.metric = mx.metric.mae)
# weights do not change
print(model$arg.params)

Have you tried calling mx.model.FeedForward.create only once and then using the fit function for incremental training?

I found a solution: begin.round in the second training step must be larger than num.round of the first training step, so that the model continues training instead of treating the call as already finished.

library(mxnet)
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, num_hidden = 2, no.bias = TRUE)
lro <- mx.symbol.LinearRegressionOutput(fc1)

# first data observation
train.x = matrix(0, ncol = 3)
train.y = matrix(0, nrow = 2)

# first training step
model = mx.model.FeedForward.create(lro,
  X = train.x, y = train.y, initializer = mx.init.uniform(0.001),
  num.round = 1, array.batch.size = 1, array.layout = "rowmajor",
  learning.rate = 0.1, eval.metric = mx.metric.mae)
print(model$arg.params)

# second data observation
train.x = matrix(0, ncol = 3)
train.x[1] = 1
train.y = matrix(0, nrow = 2)
train.y[1] = -33

# retrain model on new data
# pass on params of old model; begin.round = 2 continues from
# where the first call (num.round = 1) stopped
model = mx.model.FeedForward.create(symbol = model$symbol,
  arg.params = model$arg.params, aux.params = model$aux.params,
  X = train.x, y = train.y, begin.round = 2, num.round = 3,
  array.batch.size = 1, array.layout = "rowmajor",
  learning.rate = 0.1, eval.metric = mx.metric.mae)

print(model$arg.params)
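The same idea extends to a stream of observations: advance begin.round/num.round by one on every new data point so each call performs exactly one additional update. A minimal sketch under that assumption, reusing the model from above; simulate_point() is a hypothetical stand-in for whatever generates your data:

```r
# Hypothetical data generator: one observation with 3 features,
# two regression targets (same shapes as train.x / train.y above).
simulate_point <- function() {
  list(x = matrix(rnorm(3), ncol = 3),
       y = matrix(rnorm(2), nrow = 2))
}

round <- 1  # rounds completed so far (first call used num.round = 1)
for (i in 1:10) {
  obs <- simulate_point()
  # Continue training from the previous parameters; setting
  # begin.round and num.round to round + 1 should run one round,
  # assuming rounds begin.round through num.round are executed.
  model <- mx.model.FeedForward.create(symbol = model$symbol,
    arg.params = model$arg.params, aux.params = model$aux.params,
    X = obs$x, y = obs$y,
    begin.round = round + 1, num.round = round + 1,
    array.batch.size = 1, array.layout = "rowmajor",
    learning.rate = 0.1, eval.metric = mx.metric.mae)
  round <- round + 1
}
print(model$arg.params)
```

This keeps everything in memory, with no saving or reloading of the model between updates.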