AND gate single-layer neural network with TensorFlow

I am following this tutorial (https://medium.com/@jaschaephraim/elementary-neural-networks-with-tensorflow-c2593ad3d60b) to implement an AND gate with a single-layer neural network. The full code is below:

import tensorflow as tf

# This is TF1-style graph code, so disable eager execution when running under TF2.
tf.compat.v1.disable_eager_execution()

# Encode true/false as +1/-1 and append a constant bias input to every sample.
T, F = 1., -1.
bias = 1.
train_in = [
    [T, T, bias],
    [T, F, bias],
    [F, T, bias],
    [F, F, bias],
]
train_out = [
    [T],
    [F],
    [F],
    [F],
]

# One weight per input: the two gate inputs plus the bias, single output unit.
w = tf.Variable(tf.random.normal([3, 1]))

# Step activation: maps x > 0 to +1 and everything else to -1.
def step(x):
    is_greater = tf.greater(x, 0)
    as_float = tf.cast(is_greater, dtype=tf.float32)
    doubled = tf.multiply(as_float, 2)
    return tf.subtract(doubled, 1)

output = step(tf.matmul(train_in, w))
error = tf.subtract(train_out, output)
mse = tf.reduce_mean(tf.square(error))

# Perceptron update: delta = X^T * error, then w := w + delta.
delta = tf.matmul(train_in, error, transpose_a=True)
train = tf.compat.v1.assign(w, tf.add(w, delta))

sess = tf.compat.v1.Session()
sess.run(tf.compat.v1.global_variables_initializer())

err, target = 1, 0
epoch, max_epochs = 0, 10
while err > target and epoch < max_epochs:
    epoch += 1
    err, _ = sess.run([mse, train])
    print('epoch:', epoch, 'mse:', err)

However, I cannot understand the following lines:

delta = tf.matmul(train_in, error, transpose_a=True)
train = tf.compat.v1.assign(w, tf.add(w, delta))

Can anyone explain mathematically (with matrices, if possible) what the two lines above do, and in particular why delta is computed that way?

OK. It seems there are different perceptron weight-update rules depending on whether you use a differentiable activation function; see slides 7 and 15: https://slideplayer.com/slide/12558216/
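
For what it's worth, here is a minimal NumPy sketch (my own illustration, not taken from the tutorial; the names X, t, y and the random seed are mine) of the batch perceptron rule those two lines implement. With the 4x3 input matrix X, the 4x1 target vector t and the prediction y = step(Xw), the weight change is delta = X^T (t - y), so each weight accumulates input * error summed over the four training samples, and the update is w := w + delta.

import numpy as np

# Illustrative batch perceptron update for the AND data above (not from the tutorial).
X = np.array([[ 1.,  1., 1.],
              [ 1., -1., 1.],
              [-1.,  1., 1.],
              [-1., -1., 1.]])   # two inputs plus a constant bias column
t = np.array([[1.], [-1.], [-1.], [-1.]])

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 1))

def step(x):
    # Same +1/-1 step activation as the TensorFlow version.
    return np.where(x > 0, 1., -1.)

for epoch in range(10):
    y = step(X @ w)        # predictions, shape (4, 1)
    error = t - y          # shape (4, 1)
    delta = X.T @ error    # shape (3, 1): sum of input_i * error_i over the samples
    w = w + delta          # same effect as tf.compat.v1.assign(w, tf.add(w, delta))
    mse = np.mean(error ** 2)
    print('epoch:', epoch + 1, 'mse:', mse)
    if mse == 0:
        break

Because the step activation is not differentiable, this is the classic perceptron learning rule (the non-differentiable case in the linked slides) rather than a gradient-descent step on the MSE; with a differentiable activation the update would also include the activation's derivative.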