sparse_softmax_cross_entropy_with_logits results are worse than softmax_cross_entropy_with_logits
I implemented a standard image classification problem in TensorFlow; I have 9 classes. First I used softmax_cross_entropy_with_logits as the classifier and trained the network, and after some steps it reached about 99% training accuracy. Then I tried the same problem with sparse_softmax_cross_entropy_with_logits, and this time it does not converge at all (training accuracy stays around 0.10 to 0.20).
For reference, for softmax_cross_entropy_with_logits I use labels of shape [batch_size, num_classes] with dtype float32, and for sparse_softmax_cross_entropy_with_logits I use labels of shape [batch_size] with dtype int32.
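As a sanity check on the loss functions themselves, here is a minimal standalone comparison (just a sketch assuming the TF 1.x API, with made-up batch values that are not from my network): the two losses should return the same value when the dense labels are simply the one-hot encoding of the sparse ones.

import numpy as np
import tensorflow as tf

num_classes = 9
logits_np = np.random.randn(4, num_classes).astype(np.float32)  # [batch, num_classes]
sparse_np = np.array([0, 3, 8, 5], dtype=np.int32)              # [batch] class indices

logits        = tf.constant(logits_np)
sparse_labels = tf.constant(sparse_np)
dense_labels  = tf.one_hot(sparse_labels, depth=num_classes)    # [batch, num_classes], float32

dense_loss  = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=dense_labels, logits=logits))
sparse_loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=sparse_labels, logits=logits))

with tf.Session() as sess:
    print(sess.run([dense_loss, sparse_loss]))  # the two values should match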
Does anyone have an idea what is going wrong?
Update:
Here is the code:
def costFun(self):
    # sparse_softmax_cross_entropy_with_logits expects integer class labels of shape [batch_size]
    self.y_ = tf.reshape(self.y_, [-1])
    return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(self.score_, self.y_))

def updateFun(self):
    return tf.train.AdamOptimizer(learning_rate=self.lr_).minimize(self.cost_)

def perfFun(self):
    correct_pred = tf.equal(tf.argmax(self.score_, 1), tf.argmax(y, 1))
    return tf.reduce_mean(tf.cast(correct_pred, tf.float32))

def __init__(self, x, y, lr, lyr1FilterNo, lyr2FilterNo, lyr3FilterNo, fcHidLyrSize, inLyrSize, outLyrSize, keepProb):
    self.x_            = x
    self.y_            = y
    self.lr_           = lr
    self.inLyrSize     = inLyrSize
    self.outLyrSize_   = outLyrSize
    self.lyr1FilterNo_ = lyr1FilterNo
    self.lyr2FilterNo_ = lyr2FilterNo
    self.lyr3FilterNo_ = lyr3FilterNo
    self.fcHidLyrSize_ = fcHidLyrSize
    self.keepProb_     = keepProb

    # paramsFun (weight/bias creation) and scoreFun (forward pass) are omitted from the question
    [self.params_w_, self.params_b_] = ConvNet.paramsFun(self)
    self.score_, self.PackShow_      = ConvNet.scoreFun(self)
    self.cost_                       = ConvNet.costFun(self)
    self.update_                     = ConvNet.updateFun(self)
    self.perf_                       = ConvNet.perfFun(self)
Main script:
lyr1FilterNo = 32
lyr2FilterNo = 64
lyr3FilterNo = 128
fcHidLyrSize = 1024
inLyrSize    = 32 * 32
outLyrSize   = 9
lr           = 0.001
batch_size   = 300
dropout      = 0.5

x = tf.placeholder(tf.float32, [None, inLyrSize])
y = tf.placeholder(tf.int32, None)

ConvNet_class = ConvNet(x, y, lr, lyr1FilterNo, lyr2FilterNo, lyr3FilterNo, fcHidLyrSize, inLyrSize, outLyrSize, keepProb)
initVar = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(initVar)
    for step in range(10000):
        # the batching code that produces trData_i / trLabel_i (and the keepProb placeholder) is omitted here
        trData_i  = np.reshape(trData_i,  (-1, 32 * 32))
        trLabel_i = np.reshape(trLabel_i, (-1, 1))

        update_i, PackShow, wLyr1_i, wLyr2_i, wLyr3_i = sess.run(
            [ConvNet_class.update_, ConvNet_class.PackShow_,
             ConvNet_class.params_w_['wLyr1'], ConvNet_class.params_w_['wLyr2'], ConvNet_class.params_w_['wLyr3']],
            feed_dict={x: trData_i, y: trLabel_i, keepProb: dropout})
I found the problem, thanks to @mrry's helpful comment. I was actually computing the accuracy incorrectly. "sparse_softmax" and "softmax" take the same input logits for the loss (or cost); the difference is only in the label format. To compute the accuracy, I changed

correct_pred = tf.equal(tf.argmax(self.score_,1), tf.argmax(y,1))

to

correct_pred = tf.equal(tf.argmax(self.score_,1), y)

because with "sparse_softmax" the ground-truth labels are not in one-hot vector format, but are plain int32 or int64 class indices.
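For completeness, here is a minimal standalone sketch of the corrected accuracy computation for integer class labels (assuming the TF 1.x API; the logits and label values are made up for the example, and the cast is only there because tf.argmax returns int64 while these labels are int32):

import numpy as np
import tensorflow as tf

logits = tf.constant(np.random.randn(4, 9).astype(np.float32))  # [batch, num_classes]
labels = tf.constant(np.array([2, 0, 7, 5], dtype=np.int32))    # [batch] class indices

# compare the predicted class index directly against the integer label
predictions  = tf.argmax(logits, 1)                             # int64
correct_pred = tf.equal(predictions, tf.cast(labels, tf.int64))
accuracy     = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    print(sess.run(accuracy))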