What is the best way to implement a bias node to my python/numpy neural network?

I originally built a numpy-only neural network from an online tutorial, and later realised that it should really have some kind of bias neuron. However, I've been struggling to work out how to implement one in my code, and would really appreciate some guidance.

import numpy as np

class NN():   
    def __init__(self, layers, init_type):
        """
        layers: a list of layers, eg:
              2 input neurons
              1 hidden layer of 3 neurons
              2 output neurons
              will look like [2,3,2]
        init_type: initialisation type, "random" or "uniform" distribution
        """

        self.p = 0.1  #sigmoid steepness parameter used in the activation function

        self.layers = len(layers) - 1

        self.inputSize = layers[0]
        self.outputSize = layers[self.layers]

        self.layerSizes = layers[:-1] #input layer, hiddens, discard output layer

        self.inputs = np.zeros(self.inputSize, dtype=float)
        self.outputs = np.zeros(self.outputSize, dtype=float)

        self.L = {}

        if init_type == "random":
            for i in range(1,self.layers+1):
                if i < self.layers:
                    #np.random.random returns float64 in [0, 1); shift/scale to (-1, 1)
                    self.L[i] = (np.random.random(( self.layerSizes[i-1] , self.layerSizes[i] )) - 0.5) * 2
                else:
                    self.L[i] = (np.random.random(( self.layerSizes[i-1] , self.outputSize )) - 0.5) * 2
        elif init_type == "uniform":
            for i in range(1,self.layers+1):
                if i < self.layers:
                    self.L[i] = np.random.uniform( -1 , 1 , (self.layerSizes[i-1],self.layerSizes[i]) )
                else:
                    self.L[i] = np.random.uniform( -1 , 1 , (self.layerSizes[i-1],self.outputSize) )

        else:
            raise ValueError(f"unknown initialisation type: {init_type!r}")

    def updateS(self): #forward propagation with sigmoid activation
        for i in range(1,self.layers+1):
            if 1 == self.layers:  #dodgy no hidden layers fix
                self.z = np.dot(self.inputs, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2           
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i])
                self.temp = self.sigmoid(self.z)
            elif i < self.layers: #hidden layers
                self.z = np.dot(self.temp, self.L[i])
                self.temp = self.sigmoid(self.z)
            else: #output layer
                self.z = np.dot(self.temp, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2

    def sigmoid(self, s):
        #activation function
        return 1/(1+np.exp(-s/self.p))
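
For reference, a quick usage sketch of the class above (the [2, 3, 2] layout and the input values here are just example choices):

net = NN([2, 3, 2], "uniform")      #2 inputs, one hidden layer of 3 neurons, 2 outputs
net.inputs = np.array([0.5, -0.3])  #example input vector
net.updateS()                       #forward pass
print(net.outputs)                  #two output values in (-1, 1)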

A bias is just an extra value added to each neuron during the network's feedforward pass. So the feedforward step from one layer of neurons to the next is the sum of all the weights multiplied by the previous neurons' values feeding into the next neuron, with that neuron's bias then added on, or:

output = sum(weights * inputs) + bias
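
In numpy terms that is a single vectorised line per layer. A quick sketch (the names W, x and b are placeholders, with b holding one bias per neuron in the layer):

import numpy as np

W = np.random.uniform(-1, 1, (2, 3))  #weights: 2 inputs feeding 3 neurons
x = np.array([0.5, -0.3])             #input values
b = np.zeros(3)                       #one bias per neuron in the layer
z = np.dot(x, W) + b                  #pre-activation value for each neuron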

To illustrate this, consider a diagram of a small network with two inputs, one hidden layer of two neurons, and two outputs (image not reproduced here), where:

X1: Input value 1.
X2: Input value 2.
B1n: Layer 1, neuron n bias.
H1: Hidden layer neuron 1.
H2: Hidden layer neuron 2.
a(…): activation function.
B2n: Layer 2, neuron n bias.
Y1: network output neuron 1.
Y2: network output neuron 2.
Y1out: network output 1.
Y2out: network output 2.
T1: Training output 1.
T2: Training output 2.

When calculating H1, you would use the formula:

H1 = (X1 * W1) + (X2 * W2) + B11    

Note that this is the neuron's value before it has been passed through the activation function.
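
As a quick worked example (all numbers made up): with X1 = 1.0, X2 = 0.5, W1 = 0.2, W2 = -0.4 and B11 = 0.3, the pre-activation value is H1 = (1.0 * 0.2) + (0.5 * -0.4) + 0.3 = 0.3, which is then fed through a(…).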

So, I'm fairly sure the bias would be fed in within the feedforward function:

    def updateS(self): #forward propagation with sigmoid activation
        for i in range(1,self.layers+1):
            if 1 == self.layers:  #dodgy no hidden layers fix
                self.z = np.dot(self.inputs, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2           
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i])
                self.temp = self.sigmoid(self.z)
            elif i < self.layers: #hidden layers
                self.z = np.dot(self.temp, self.L[i])
                self.temp = self.sigmoid(self.z)
            else: #output layer
                self.z = np.dot(self.temp, self.L[i])
                self.outputs = ( self.sigmoid(self.z) - 0.5)*2

by adding a value onto the end of each self.z calculation, before the activation. I think those values could start out as anything you like, since a bias just shifts the intercept of the linear equation.
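
To make that concrete, here is a minimal sketch of one way it could look (the self.B dictionary, its zero initialisation, and the updateSB name are my own assumed additions, not part of the original code):

        #added at the end of __init__: one bias vector per weight matrix,
        #with one entry per neuron in that layer (zeros are a common start;
        #the values would then be learned alongside the weights)
        self.B = {}
        for i in range(1, self.layers + 1):
            self.B[i] = np.zeros(self.L[i].shape[1])

    #forward pass with biases: each self.z picks up its layer's bias
    #before the activation function is applied
    def updateSB(self):
        for i in range(1, self.layers + 1):
            if 1 == self.layers:  #no hidden layers
                self.z = np.dot(self.inputs, self.L[i]) + self.B[i]
                self.outputs = (self.sigmoid(self.z) - 0.5) * 2
            elif i == 1:  #input layer
                self.z = np.dot(self.inputs, self.L[i]) + self.B[i]
                self.temp = self.sigmoid(self.z)
            elif i < self.layers:  #hidden layers
                self.z = np.dot(self.temp, self.L[i]) + self.B[i]
                self.temp = self.sigmoid(self.z)
            else:  #output layer
                self.z = np.dot(self.temp, self.L[i]) + self.B[i]
                self.outputs = (self.sigmoid(self.z) - 0.5) * 2

During training, each bias entry would then get its own update, just like the weights.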