MXNet classification in Python always gives the same predictions
What I am trying to do here is build a neural network model that predicts the match result (1 or 0), using the tennis match statistics dataset provided here as input.
Following the official MXNet documentation, I developed the program below.
I have tried various configuration parameters such as batch_size, unit_size, act_type, and learning_rate, but no matter what modification I can think of, the accuracy I get always hovers around 0.5, and the model always predicts all 1s or all 0s.
import numpy as np
from sklearn.preprocessing import normalize
import mxnet as mx
import logging
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
logging.basicConfig(level=logging.DEBUG, format='%(asctime)-15s %(message)s')
batch_size = 100
train_data = np.loadtxt("dm.csv",delimiter=",")
train_data = normalize(train_data, norm='l1', axis=0)
train_lbl = np.loadtxt("dm_lbl.csv",delimiter=",")
eval_data = np.loadtxt("dw.csv",delimiter=",")
eval_data = normalize(eval_data, norm='l1', axis=0)
eval_lbl = np.loadtxt("dw_lbl.csv",delimiter=",")
train_iter = mx.io.NDArrayIter(train_data, train_lbl, batch_size=batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(eval_data, eval_lbl, batch_size=batch_size)
data = mx.sym.var('data')
# The first fully-connected layer and the corresponding activation function
fc1 = mx.sym.FullyConnected(data=data, num_hidden=220)
#bn1 = mx.sym.BatchNorm(data = fc1, name="bn1")
act1 = mx.sym.Activation(data=fc1, act_type="sigmoid")
# The second fully-connected layer and the corresponding activation function
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=220)
#bn2 = mx.sym.BatchNorm(data = fc2, name="bn2")
act2 = mx.sym.Activation(data=fc2, act_type="sigmoid")
# The third fully-connected layer and the corresponding activation function
fc3 = mx.sym.FullyConnected(data=act2, num_hidden=110)
#bn3 = mx.sym.BatchNorm(data = fc3, name="bn3")
act3 = mx.sym.Activation(data=fc3, act_type="sigmoid")
# output class(es)
fc4 = mx.sym.FullyConnected(data=act3, num_hidden=2)
# Softmax with cross entropy loss
mlp = mx.sym.SoftmaxOutput(data=fc4, name='softmax')
mod = mx.mod.Module(symbol=mlp,
                    context=mx.cpu(),
                    data_names=['data'],
                    label_names=['softmax_label'])
mod.fit(train_iter,
        eval_data=val_iter,
        optimizer='sgd',
        optimizer_params={'learning_rate': 0.03},
        eval_metric='rmse',
        num_epoch=10,
        batch_end_callback=mx.callback.Speedometer(batch_size, 100))  # log progress every 100 batches
prob = mod.predict(val_iter).asnumpy()
#print(prob)
for unit in prob:
    print 'Classified as %d with probability %f' % (unit.argmax(), max(unit))
Here is the log output:
2017-06-19 17:18:34,961 Epoch[0] Train-rmse=0.500574
2017-06-19 17:18:34,961 Epoch[0] Time cost=0.007
2017-06-19 17:18:34,968 Epoch[0] Validation-rmse=0.500284
2017-06-19 17:18:34,975 Epoch[1] Train-rmse=0.500703
2017-06-19 17:18:34,975 Epoch[1] Time cost=0.007
2017-06-19 17:18:34,982 Epoch[1] Validation-rmse=0.500301
2017-06-19 17:18:34,990 Epoch[2] Train-rmse=0.500713
2017-06-19 17:18:34,990 Epoch[2] Time cost=0.008
2017-06-19 17:18:34,998 Epoch[2] Validation-rmse=0.500302
2017-06-19 17:18:35,005 Epoch[3] Train-rmse=0.500713
2017-06-19 17:18:35,005 Epoch[3] Time cost=0.007
2017-06-19 17:18:35,012 Epoch[3] Validation-rmse=0.500302
2017-06-19 17:18:35,019 Epoch[4] Train-rmse=0.500713
2017-06-19 17:18:35,019 Epoch[4] Time cost=0.007
2017-06-19 17:18:35,027 Epoch[4] Validation-rmse=0.500302
2017-06-19 17:18:35,035 Epoch[5] Train-rmse=0.500713
2017-06-19 17:18:35,035 Epoch[5] Time cost=0.008
2017-06-19 17:18:35,042 Epoch[5] Validation-rmse=0.500302
2017-06-19 17:18:35,049 Epoch[6] Train-rmse=0.500713
2017-06-19 17:18:35,049 Epoch[6] Time cost=0.007
2017-06-19 17:18:35,056 Epoch[6] Validation-rmse=0.500302
2017-06-19 17:18:35,064 Epoch[7] Train-rmse=0.500712
2017-06-19 17:18:35,064 Epoch[7] Time cost=0.008
2017-06-19 17:18:35,071 Epoch[7] Validation-rmse=0.500302
2017-06-19 17:18:35,079 Epoch[8] Train-rmse=0.500712
2017-06-19 17:18:35,079 Epoch[8] Time cost=0.007
2017-06-19 17:18:35,085 Epoch[8] Validation-rmse=0.500301
2017-06-19 17:18:35,093 Epoch[9] Train-rmse=0.500712
2017-06-19 17:18:35,093 Epoch[9] Time cost=0.007
2017-06-19 17:18:35,099 Epoch[9] Validation-rmse=0.500301
Classified as 0 with probability 0.530638
Classified as 0 with probability 0.530638
Classified as 0 with probability 0.530638
.
.
.
Classified as 0 with probability 0.530638
Can anyone tell me where I went wrong?
python version == 2.7.10
mxnet == 0.10.0
numpy==1.12.0
I removed some non-informative columns and the headers from the dataset, and then converted it to CSV format.
train_data.shape == (491, 22)
train_lbl.shape == (491,)
eval_data.shape == (452, 22)
eval_lbl.shape == (452,)
The network definition looks correct. Could you print train_iter and val_iter to check whether the data still looks the way you expect after normalization? Also, which columns did you remove from the raw data?
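One thing worth checking: with norm='l1' and axis=0, sklearn scales each column so its absolute values sum to 1 across all ~491 rows, which leaves every individual feature value on the order of 1/491. Sigmoid layers fed inputs that tiny can easily stall near 0.5. Below is a minimal sketch (using random data as a stand-in for the real dm.csv, so the exact numbers are illustrative) contrasting that with per-feature standardization:

```python
import numpy as np
from sklearn.preprocessing import normalize

rng = np.random.RandomState(0)
X = rng.rand(491, 22) * 100  # stand-in for the 491x22 dm.csv features

# L1 normalization along axis=0: each COLUMN sums to 1 across all rows
X_l1 = normalize(X, norm='l1', axis=0)
print(X_l1.sum(axis=0))  # every column sums to ~1.0
print(X_l1.max())        # individual entries are tiny, roughly 1/491 in scale

# Per-feature standardization keeps values on a unit scale instead
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.mean(axis=0))  # roughly 0 per column
print(X_std.std(axis=0))   # roughly 1 per column
```

If your normalized inputs really are all near 0.002, trying standardized features (or sklearn's StandardScaler) would be a quick experiment to rule this out.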