How to find the predicted output of a classification neural network in python?
I am new to Python and to learning neural networks. I have a trained 3-layer feedforward neural network with 2 neurons in the hidden layer and 3 neurons in the output layer. I would like to know how to calculate the output-layer values / the predicted output.
I have extracted the weights and biases from the network and computed the activation values of the hidden layer. I just want to confirm how to use the softmax function to compute the outputs of the output-layer neurons.
My implementation is as follows:
import numpy as np

weights_from_hiddenLayer_to_OutputLayer = [
    [x, y],  # two weights connecting hidden neurons 1 and 2 to output neuron 1
    [a, b],  # two weights connecting hidden neurons 1 and 2 to output neuron 2
    [c, d]   # two weights connecting hidden neurons 1 and 2 to output neuron 3
]
# output layer biases extracted from the neural network
biases_output_layer = [a, b, c]
act1 = m  # activation value of hidden neuron 1
act2 = n  # activation value of hidden neuron 2
arr = []
for i, weights in enumerate(weights_from_hiddenLayer_to_OutputLayer):
    arr.append(act1 * weights[0] + act2 * weights[1] +
               biases_output_layer[i])
# I believe this will be the strongest neuron / the network's predicted output?
print(np.argmax(arr))
I searched the internet for how to use softmax in Python, and this is what I came up with. However, my predicted output is very different from the neural network's prediction, even though I am using exactly the same values from the same trained model.
Your output will be the matrix multiplication of weights_from_hiddenLayer_to_OutputLayer and the previous activations.
You can then pass the result through the softmax function to get a probability distribution and, as you guessed, use argmax to get the corresponding class.
import numpy as np

weights_from_hiddenLayer_to_OutputLayer = np.array([
    [x, y],  # two weights connecting hidden neurons 1 and 2 to output neuron 1
    [a, b],  # two weights connecting hidden neurons 1 and 2 to output neuron 2
    [c, d]   # two weights connecting hidden neurons 1 and 2 to output neuron 3
])
act = np.array([m, n])
biases_output_layer = np.array([a, b, c])
arr = np.dot(weights_from_hiddenLayer_to_OutputLayer, act)  # matrix multiplication of weights and activations
arr = arr + biases_output_layer
probability = np.exp(arr) / np.sum(np.exp(arr), axis=0)  # softmax
print(np.argmax(probability))
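A side note that is not part of the original answer: np.exp(arr) can overflow when the logits are large. A minimal sketch of the usual numerically stable variant (the helper name stable_softmax is just for illustration) subtracts the maximum logit before exponentiating; softmax is shift-invariant, so the probabilities are unchanged.
import numpy as np

def stable_softmax(logits):
    # shifting by the max keeps np.exp from overflowing; the result is the same distribution
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)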
Note that, technically, you don't need softmax at all unless you are backpropagating or trying to evaluate the confidence of the output, because np.argmax() returns the same result whether you pass it arr or the corresponding probability.
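To illustrate that last point with made-up logits (purely a sanity check, not values from the trained model):
import numpy as np

arr = np.array([2.0, -1.0, 0.5])                 # made-up logits
probability = np.exp(arr) / np.sum(np.exp(arr))  # softmax, as in the answer
print(np.argmax(arr), np.argmax(probability))    # both print 0: softmax preserves the ordering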