MxNet with R: Simple XOR Neural Network is not learning
I wanted to experiment with the MxNet library and build a simple neural network that learns the XOR function. The problem I am facing is that the model is not learning.
Here is the complete script:
library(mxnet)
train = matrix(c(0,0,0,
                 0,1,1,
                 1,0,1,
                 1,1,0),
               nrow=4,
               ncol=3,
               byrow=TRUE)
train.x = train[,-3]
train.y = train[,3]
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=2)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=1)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")
mx.set.seed(0)
model <- mx.model.FeedForward.create(
  softmax,
  X = t(train.x),
  y = train.y,
  num.round = 10,
  array.layout = "columnmajor",
  learning.rate = 0.01,
  momentum = 0.4,
  eval.metric = mx.metric.accuracy,
  epoch.end.callback = mx.callback.log.train.metric(100))
predict(model,train.x,array.layout="rowmajor")
This produces the following output:
Start training with 1 devices
[1] Train-accuracy=NaN
[2] Train-accuracy=0.5
[3] Train-accuracy=0.5
[4] Train-accuracy=0.5
[5] Train-accuracy=0.5
[6] Train-accuracy=0.5
[7] Train-accuracy=0.5
[8] Train-accuracy=0.5
[9] Train-accuracy=0.5
[10] Train-accuracy=0.5
> predict(model,train.x,array.layout="rowmajor")
[,1] [,2] [,3] [,4]
[1,] 1 1 1 1
How do I have to use mxnet to get this example working?
Regards,
Vaka
OK, I experimented some more, and I now have a working example of XOR with mxnet in R. The tricky part was not the mxnet API, but the use of the neural network itself.
So here is the working R code:
library(mxnet)
train = matrix(c(0,0,0,
                 0,1,1,
                 1,0,1,
                 1,1,0),
               nrow=4,
               ncol=3,
               byrow=TRUE)
train.x = t(train[,-3])
train.y = t(train[,3])
data <- mx.symbol.Variable("data")
act0 <- mx.symbol.Activation(data, name="relu1", act_type="relu")
fc1 <- mx.symbol.FullyConnected(act0, name="fc1", num_hidden=2)
act1 <- mx.symbol.Activation(fc1, name="relu2", act_type="tanh")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="relu3", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=1)
act3 <- mx.symbol.Activation(fc3, name="relu4", act_type="relu")
softmax <- mx.symbol.LinearRegressionOutput(act3, name="sm")
mx.set.seed(0)
model <- mx.model.FeedForward.create(
  softmax,
  X = train.x,
  y = train.y,
  num.round = 10000,
  array.layout = "columnmajor",
  learning.rate = 10^-2,
  momentum = 0.95,
  eval.metric = mx.metric.rmse,
  epoch.end.callback = mx.callback.log.train.metric(10),
  lr_scheduler = mx.lr_scheduler.FactorScheduler(1000, factor = 0.9),
  initializer = mx.init.uniform(0.5)
)
predict(model,train.x,array.layout="columnmajor")
There are a few differences from the initial code:
- I changed the layout of the neural network by placing another activation layer between the data and the first layer. I interpret this as placing weights between the data and the input layer (is that correct?)
- I changed the activation function of the hidden layer (the one with 3 neurons) to tanh, because I guess XOR needs negative weights (see the short comparison after this list)
- I changed SoftmaxOutput to LinearRegressionOutput in order to optimize for squared loss
- I fine-tuned the learning rate and momentum
- Most importantly: I added a uniform initializer for the weights (mx.init.uniform(0.5) draws the initial weights uniformly from [-0.5, 0.5], as I understand it). I guess the default mode is to set the weights to zero. Learning sped up considerably with random initial weights.
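To see why the tanh in the middle of the network helps, compare how the two activation functions treat a negative pre-activation (plain base R, just for illustration):
x <- c(-0.7, 0.7)
tanh(x)       # both signs survive: tanh maps into (-1, 1)
pmax(0, x)    # relu zeroes out the negative entry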
Training output:
Start training with 1 devices
[1] Train-rmse=NaN
[2] Train-rmse=0.706823888574888
[3] Train-rmse=0.705537411582449
[4] Train-rmse=0.701298592443344
[5] Train-rmse=0.691897326795625
...
[9999] Train-rmse=1.07453801496744e-07
[10000] Train-rmse=1.07453801496744e-07
> predict(model,train.x,array.layout="columnmajor")
[,1] [,2] [,3] [,4]
[1,] 0 0.9999998 1 0
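As a quick sanity check (a minimal sketch reusing model, train.x and train.y from the script above), you can round the regression output and recompute the reported RMSE by hand:
pred <- predict(model, train.x, array.layout = "columnmajor")
round(pred)                       # reproduces the XOR truth table: 0 1 1 0
sqrt(mean((pred - train.y)^2))    # squared loss -> RMSE, matching the training log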
Normally an activation layer does not come right after the input, because the activation should be applied after the first layer's computation has finished.
You can still make your old code mimic the XOR function, but it needs a few tweaks:
- You are right that you need to initialize the weights. There is a big discussion in the deep learning community about which initial weights are best, but from my practice Xavier weights work well (see the sketch after this list)
- If you want to use softmax, you need to change the number of units in the last hidden layer to 2, because you have 2 classes: 0 and 1
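For intuition, here is a rough sketch of the bound a Xavier uniform initializer draws from; this mirrors my understanding of mx.init.Xavier with rnd_type = "uniform" and factor_type = "avg", and xavier_bound is my own illustrative helper, not an mxnet function:
xavier_bound <- function(fan_in, fan_out, magnitude = 3) {
  # magnitude = 3 recovers the classic Glorot bound sqrt(6 / (fan_in + fan_out))
  sqrt(magnitude / ((fan_in + fan_out) / 2))
}
b <- xavier_bound(2, 2)            # e.g. fc1: 2 inputs -> 2 hidden units
runif(2 * 2, min = -b, max = b)    # one draw per weight of fc1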
After doing these two things, plus some small optimizations such as removing the transposition of the matrices, the code looks like this:
library(mxnet)
train = matrix(c(0,0,0,
                 0,1,1,
                 1,0,1,
                 1,1,0),
               nrow=4,
               ncol=3,
               byrow=TRUE)
train.x = train[,-3]
train.y = train[,3]
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=2)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=2)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")
mx.set.seed(0)
model <- mx.model.FeedForward.create(
  softmax,
  X = train.x,
  y = train.y,
  num.round = 50,
  array.layout = "rowmajor",
  learning.rate = 0.1,
  momentum = 0.99,
  eval.metric = mx.metric.accuracy,
  initializer = mx.init.Xavier(rnd_type = "uniform", factor_type = "avg", magnitude = 3),
  epoch.end.callback = mx.callback.log.train.metric(100))
predict(model,train.x,array.layout="rowmajor")
And we get the following results:
Start training with 1 devices
[1] Train-accuracy=NaN
[2] Train-accuracy=0.75
[3] Train-accuracy=0.5
[4] Train-accuracy=0.5
[5] Train-accuracy=0.5
[6] Train-accuracy=0.5
[7] Train-accuracy=0.5
[8] Train-accuracy=0.5
[9] Train-accuracy=0.5
[10] Train-accuracy=0.75
[11] Train-accuracy=0.75
[12] Train-accuracy=0.75
[13] Train-accuracy=0.75
[14] Train-accuracy=0.75
[15] Train-accuracy=0.75
[16] Train-accuracy=0.75
[17] Train-accuracy=0.75
[18] Train-accuracy=0.75
[19] Train-accuracy=0.75
[20] Train-accuracy=0.75
[21] Train-accuracy=0.75
[22] Train-accuracy=0.5
[23] Train-accuracy=0.5
[24] Train-accuracy=0.5
[25] Train-accuracy=0.75
[26] Train-accuracy=0.75
[27] Train-accuracy=0.75
[28] Train-accuracy=0.75
[29] Train-accuracy=0.75
[30] Train-accuracy=0.75
[31] Train-accuracy=0.75
[32] Train-accuracy=0.75
[33] Train-accuracy=0.75
[34] Train-accuracy=0.75
[35] Train-accuracy=0.75
[36] Train-accuracy=0.75
[37] Train-accuracy=0.75
[38] Train-accuracy=0.75
[39] Train-accuracy=1
[40] Train-accuracy=1
[41] Train-accuracy=1
[42] Train-accuracy=1
[43] Train-accuracy=1
[44] Train-accuracy=1
[45] Train-accuracy=1
[46] Train-accuracy=1
[47] Train-accuracy=1
[48] Train-accuracy=1
[49] Train-accuracy=1
[50] Train-accuracy=1
>
> predict(model,train.x,array.layout="rowmajor")
[,1] [,2] [,3] [,4]
[1,] 0.9107883 2.618128e-06 6.384078e-07 0.9998743534
[2,] 0.0892117 9.999974e-01 9.999994e-01 0.0001256234
The output of softmax is interpreted as "a probability of belonging to a class": it is not the literal "0" or "1" value you would get from regular arithmetic. The answer reads as follows:
- For "0 and 0": probability of class "0" = 0.9107883 and probability of class "1" = 0.0892117, which means it is 0
- For "0 and 1": probability of class "0" = 2.618128e-06 and probability of class "1" = 9.999974e-01, which means it is 1 (the probability of class 1 is much higher)
- For "1 and 0": probability of class "0" = 6.384078e-07 and probability of class "1" = 9.999994e-01 (the probability of class 1 is much higher)
- For "1 and 1": probability of class "0" = 0.9998743534 and probability of class "1" = 0.0001256234, so it is 0.
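To turn these column-wise probabilities into hard class labels, a small base R sketch (assuming the model from the code above):
pred <- predict(model, train.x, array.layout = "rowmajor")
max.col(t(pred)) - 1    # pick the most probable class per sample; yields 0 1 1 0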