Sigmoid function prediction generates continuous numbers and an error when exported to a DataFrame
I'm new to TensorFlow, so I'm trying to get my hands dirty by working on a binary classification problem on Kaggle. I have trained the model with a sigmoid function and it gets very good accuracy at test time, but when I try to export the predictions to a DataFrame for submission, I get the error below. I have attached the code, the predictions, and the output. Please point out what I am doing wrong; I suspect it has something to do with my sigmoid function. Thanks.
This is the output of the predictions; the expected values are 1s and 0s.
INFO:tensorflow:Restoring parameters from ./movie_review_variables
Prections are [[3.8743019e-07]
[9.9999821e-01]
[1.7650980e-01]
...
[9.9997473e-01]
[1.4901161e-07]
[7.0333481e-06]]
#Importing tensorflow
import tensorflow as tf

#defining hyperparameters
learning_rate = 0.01
training_epochs = 1000
batch_size = 100
num_labels = 2
num_features = 5000
train_size = 20000

#defining the placeholders and encoding the y placeholder
X = tf.placeholder(tf.float32, shape=[None, num_features])
Y = tf.placeholder(tf.int32, shape=[None])
y_oneHot = tf.one_hot(Y, 1)

#defining the model parameters -- weight and bias
W = tf.Variable(tf.zeros([num_features, 1]))
b = tf.Variable(tf.zeros([1]))

#defining the sigmoid model and setting up the learning algorithm
y_model = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b))
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_model, labels=y_oneHot)
train_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

#defining operation to measure success rate
correct_prediction = tf.equal(tf.argmax(y_model, 1), tf.argmax(y_oneHot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

#saving variables
saver = tf.train.Saver()

#executing the graph and saving the model variables
with tf.Session() as sess:  #new session
    tf.global_variables_initializer().run()

    #Iteratively updating parameters batch by batch
    for step in range(training_epochs * train_size // batch_size):
        offset = (step * batch_size) % train_size
        batch_xs = x_train[offset:(offset + batch_size), :]
        batch_labels = y_train[offset:(offset + batch_size)]

        #run optimizer on batch
        err, _ = sess.run([cost, train_optimizer], feed_dict={X: batch_xs, Y: batch_labels})
        if step % 1000 == 0:
            print(step, err)  #print ongoing result

    #Print final learned parameters
    w_val = sess.run(W)
    print('w', w_val)
    b_val = sess.run(b)
    print('b', b_val)
    print('Accuracy', accuracy.eval(feed_dict={X: x_test, Y: y_test}))
    save_path = saver.save(sess, './movie_review_variables')
    print('Model saved in path {}'.format(save_path))

#creating csv file for kaggle submission
with tf.Session() as sess:
    saver.restore(sess, './movie_review_variables')
    predictions = sess.run(y_model, feed_dict={X: test_data_features})
    subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': predictions})
    subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
    print("I am done predicting")
INFO:tensorflow:Restoring parameters from ./movie_review_variables
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-85-fd74ed82109c> in <module>()
5 # print('Prections are {}'.format(predictions))
6
----> 7 subm2 = pd.DataFrame(data={'id':test['id'], 'sentiment':predictions})
8 subm2.to_csv('subm2nlp.csv', index=False, quoting=3)
9 print("I am done predicting")
Exception: Data must be 1-dimensional
You can see from the definition of the sigmoid function that it always produces a continuous output. If you want to discretize your output, you need to choose some threshold above which you set the prediction to 1 and below which it is zero:
pred = tf.math.greater(y_model, tf.constant(0.5))
However, you should be careful about choosing an appropriate threshold, because there is no guarantee that your model is well calibrated with respect to probabilities. You can pick a suitable threshold based on what gives the best discrimination on some held-out validation set.
It is important that this step is used for evaluation only, since you cannot backpropagate the loss signal through this operation.
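The same thresholding can also be done in NumPy after sess.run instead of with a TensorFlow op. A minimal sketch, assuming predictions and test are the objects from the question (predictions has shape (N, 1)):

import numpy as np
import pandas as pd

# threshold at 0.5, cast to int, and flatten the (N, 1) array to 1-D
labels = (predictions > 0.5).astype(int).ravel()

subm2 = pd.DataFrame(data={'id': test['id'], 'sentiment': labels})
subm2.to_csv('subm2nlp.csv', index=False, quoting=3)

Flattening with ravel() also resolves the "Data must be 1-dimensional" error, since pandas cannot build a column directly from a 2-D array.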
You need to set some threshold on the sigmoid output. E.g. split the output into bins with a spacing of 0.5 between them:
>>> import numpy as np
>>> x = np.linspace(0, 10, 20)
>>> x
array([ 0. , 0.52631579, 1.05263158, 1.57894737, 2.10526316,
2.63157895, 3.15789474, 3.68421053, 4.21052632, 4.73684211,
5.26315789, 5.78947368, 6.31578947, 6.84210526, 7.36842105,
7.89473684, 8.42105263, 8.94736842, 9.47368421, 10. ])
>>> q = 0.5 # The continuous value between two discrete points
>>> y = q * np.round(x/q)
>>> y
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5.5,
6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. ])
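Applied to sigmoid outputs in [0, 1], the same recipe with a spacing of 1 rounds each probability to the nearest integer and gives the 0/1 labels expected in the submission. A small sketch reusing a few of the prediction values printed in the question:

>>> probs = np.array([3.8743019e-07, 9.9999821e-01, 1.7650980e-01, 9.9997473e-01])
>>> q = 1.0  # a bin spacing of 1 yields binary labels
>>> (q * np.round(probs / q)).astype(int)
array([0, 1, 0, 1])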