R - neuralnet - Traditional backprop seems strange
I'm trying out the different algorithms in the neuralnet
package, but when I use the traditional backprop
algorithm the results are very strange/disappointing: nearly every computed value comes out around 0.33. I assume I must be using the algorithm incorrectly, because if I run the same model with the default rprop+
it does differentiate between samples. Surely ordinary backpropagation isn't that bad, especially when it converges to the supplied threshold so quickly.
library(neuralnet)
data(infert)
set.seed(123)
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous,
data = infert, hidden = 3,
learningrate = 0.01,
algorithm = "backprop",
err.fct = "ce",
linear.output = FALSE,
lifesign = 'full',
lifesign.step = 100)
preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result
summary(preds)
V1
Min. :0.3347060
1st Qu.:0.3347158
Median :0.3347161
Mean :0.3347158
3rd Qu.:0.3347162
Max. :0.3347286
Should some setting here be different?
Example with the default algorithm (rprop+):
set.seed(123)
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous,
data = infert, hidden = 3,
err.fct = "ce",
linear.output = FALSE,
lifesign = 'full',
lifesign.step = 100)
preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result
summary(preds)
V1
Min. :0.1360947
1st Qu.:0.1516387
Median :0.1984035
Mean :0.3346734
3rd Qu.:0.4838288
Max. :1.0000000
The advice is to normalize your data before feeding it to the neural network. Plain backprop with a fixed learning rate is sensitive to the scale of the inputs: large raw values such as age can saturate the logistic hidden units, so every sample gets mapped to nearly the same output, whereas rprop+ adapts its step sizes per weight and copes better with unscaled data. If you normalize first, you're good to go:
library(neuralnet)
data(infert)
set.seed(123)
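# standardize the predictors (zero mean, unit variance) before fitting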
infert[,c('age','parity','induced','spontaneous')] <- scale(infert[,c('age','parity','induced','spontaneous')])
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous,
data = infert, hidden = 3,
learningrate = 0.01,
algorithm = "backprop",
err.fct = "ce",
linear.output = FALSE,
lifesign = 'full',
lifesign.step = 100)
preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result
summary(preds)
V1
Min. :0.02138785
1st Qu.:0.21002456
Median :0.21463423
Mean :0.33471568
3rd Qu.:0.47239818
Max. :0.97874839
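One caveat worth noting (my addition, not part of the original answer): scale() is fitted on the full data frame here, so any genuinely new observations passed to compute() later would need to be transformed with the same centering and scaling parameters. A minimal sketch of how that could look, with new_obs as a hypothetical unseen row:

data(infert)  # reload the raw data; the code above overwrote these columns
cols <- c('age','parity','induced','spontaneous')
sc <- scale(infert[, cols])
train_center <- attr(sc, "scaled:center")   # per-column means
train_scale  <- attr(sc, "scaled:scale")    # per-column standard deviations
# hypothetical new observation, scaled with the training-set parameters
new_obs <- data.frame(age = 30, parity = 2, induced = 1, spontaneous = 0)
new_scaled <- scale(new_obs, center = train_center, scale = train_scale)
neuralnet::compute(fit, new_scaled)$net.result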
There are actually several questions on SO that deal with this issue. Why do we have to normalize the input for an artificial neural network? seems to have some of the most detailed coverage.
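For completeness, min-max rescaling to [0, 1] is another option that comes up in those threads; a minimal sketch (my own, not code from the linked answers):

# rescale each predictor to the [0, 1] range as an alternative to scale()
minmax <- function(x) (x - min(x)) / (max(x) - min(x))
data(infert)  # start again from the raw data
cols <- c('age','parity','induced','spontaneous')
infert[, cols] <- lapply(infert[, cols], minmax)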