Single neuron feed-forward network in TensorFlow

I made a single-neuron feed-forward network. The prediction prints 0.5 when it should print 0.0. I am very new to TensorFlow, please help. Here is my code:

"""
O---(w1)-\
          \
O---(w2)-->Sum ---> Sigmoid ---> O  3 inputs and 1 output
          /
O---(w3)-/

          |   Input     | Output
Example 1 | 0   0   1   |   0   
Example 2 | 1   1   1   |   1
Example 3 | 1   0   1   |   1
Example 4 | 0   1   1   |   0

"""

import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 3])
labels = tf.placeholder(tf.float32, [None])

#Random weights
W = tf.Variable([[-0.16595599], [0.44064899], [-0.99977125]], tf.float32)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

predict = tf.nn.sigmoid(tf.matmul(features, W))

error = labels - predict

# Training
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(error)

for i in range(10000):
    sess.run(train, feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], labels: [0, 1, 1, 0]})

training_cost = sess.run(error, feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], labels: [0, 1, 1, 0]})
print('Training cost = ', training_cost, 'W = ', sess.run(W))

print(sess.run(predict, feed_dict={features:[[0, 1, 1]]}))

I also implemented this model by hand using only numpy, and it works fine.
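
For reference, here is a minimal numpy sketch of that kind of single-neuron model (not the asker's actual code, just plain gradient descent on the squared error, using the table data above):

import numpy as np

# Training data from the table above: 4 examples, 3 inputs each, targets as a column.
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(1)
W = 2 * np.random.random((3, 1)) - 1          # random weights in [-1, 1)

for _ in range(10000):
    pred = sigmoid(X.dot(W))                  # forward pass, shape (4, 1)
    error = y - pred                          # per-example error
    W += X.T.dot(error * pred * (1 - pred))   # gradient step on the squared error

print(sigmoid(np.array([[0, 1, 1]], dtype=float).dot(W)))   # close to 0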

EDIT: I have already tried all kinds of cost functions, including tf.reduce_mean((predict - labels)**2).

You have two errors.

(a) Your original error function optimizes the wrong objective: minimizing labels - predict directly rewards making the predictions as large as possible rather than making them match the labels.

(b) Your target vector is effectively transposed: the labels placeholder has shape [None] while predict has shape [None, 1], so the subtraction broadcasts. The following line makes this visible:

print(sess.run(predict - labels, feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], labels: [0, 1, 1, 0]}))

The result is a 4x4 matrix instead of 4 per-example errors.
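
A quick way to see the broadcasting on its own, as a sketch with plain numpy shapes mirroring the placeholders above (the numbers are made up, only the shapes matter):

import numpy as np

labels = np.array([0., 1., 1., 0.])                  # shape (4,), like the [None] placeholder
predict = np.array([[0.5], [0.7], [0.6], [0.5]])     # shape (4, 1), like the sigmoid output

diff = labels - predict                              # (4,) broadcast against (4, 1)
print(diff.shape)                                    # prints (4, 4)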

You can get the expected result with the following code:

import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 3])
labels = tf.placeholder(tf.float32, [None,1])

# Initial weights (hard-coded starting point, not actually random)
W = tf.Variable([[10.0], [000.0], [0.200]], dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

    predict = tf.nn.sigmoid(tf.matmul(features, W))   # single-neuron forward pass

    print(sess.run(predict, feed_dict={features:[[0, 1, 1]]}))   # prediction before training

    lbls = [[0], [1], [1], [0]]   # targets as a column vector, matching the [None, 1] placeholder
    print(sess.run(predict,
                 feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], labels:lbls}))


    #    error = labels - predict
    error = tf.reduce_mean((labels - predict)**2)   # mean squared error over the batch
    # Training
    optimizer = tf.train.GradientDescentOptimizer(10)
    train = optimizer.minimize(error)

    for i in range(100):
        sess.run(train,
        feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], labels: lbls})
        training_cost = sess.run(error,
                             feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]],
                                        labels: lbls})
        # residuals (labels - predict) for each training example
        residuals = sess.run(labels - predict,
                             feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]],
                                        labels: lbls})
        print('Training cost = ', training_cost, 'labels - predict = ', residuals)

    print(sess.run(predict,
                 feed_dict={features: [[0, 1, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]]}))
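
As a side note (not from the original answer), squared error on top of the sigmoid is not the only option. A common alternative is to feed the pre-sigmoid value as logits into TensorFlow's built-in cross-entropy. A minimal sketch of how the loss above could be swapped, assuming the same [None, 1] labels placeholder; the learning rate of 1.0 is an arbitrary pick:

logits = tf.matmul(features, W)                           # pre-sigmoid output of the neuron
predict = tf.nn.sigmoid(logits)                           # probability, used only for printing
error = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
train = tf.train.GradientDescentOptimizer(1.0).minimize(error)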