Neural network backpropagation regression: how to correctly learn the cos function?

Following Lutz Lehmann's suggestion, I found the problem was the random initialization of the weights and biases. With np.random.seed(2021) as the seed, the error never converged; but with np.random.seed(10), the error converges to a relatively small value by about epoch 600.

Galletti_Lance's suggestion is also right: the activation function should be changed to a periodic one. I enlarged the interval of the sin data and the learning error no longer converged; sure enough, it was overfitting. (A sketch of a periodic middle layer follows the snippet below.)

input_data = np.arange(0, np.pi * 4, 0.1)  # input
correct_data = np.sin(input_data)  # correct answer
input_data = (input_data - np.pi*2) / np.pi 
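For illustration only, here is a minimal sketch of what a periodic middle layer could look like in this framework (the class name SinLayer and the choice of np.sin as activation are my own, not from the original code; it mirrors the MiddleLayer class shown further down):

import numpy as np

wb_width = 0.1  # spread of the initial weights and biases (value chosen for this sketch)

class SinLayer:
    def __init__(self, n_upper, n):
        self.w = wb_width * np.random.randn(n_upper, n)  # weight (matrix)
        self.b = wb_width * np.random.randn(n)  # bias (vector)

    def forward(self, x):  # forward propagation with a periodic activation
        self.x = x
        self.u = np.dot(x, self.w) + self.b
        self.y = np.sin(self.u)

    def backward(self, grad_y):  # derivative of sin(u) is cos(u)
        delta = grad_y * np.cos(self.u)
        self.grad_w = np.dot(self.x.T, delta)
        self.grad_b = np.sum(delta, axis=0)
        self.grad_x = np.dot(delta, self.w.T)

    def update(self, eta):  # gradient-descent update of weights and biases
        self.w -= eta * self.grad_w
        self.b -= eta * self.grad_b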

Learning the cos function with np.random.seed(2021), the error up to epoch 20000 is as follows:

Epoch:0/20001 Error:0.2904405534384431
Epoch:200/20001 Error:0.2752981376571506
Epoch:400/20001 Error:0.27356300803051226
Epoch:600/20001 Error:0.27409878767315193
Epoch:800/20001 Error:0.2638216736165815
Epoch:1000/20001 Error:0.27196157366033213
Epoch:1200/20001 Error:0.2743520487664953
Epoch:1400/20001 Error:0.2589745966244678
Epoch:1600/20001 Error:0.2705289192984957
Epoch:1800/20001 Error:0.2689693217636388
....
Epoch:20000/20001 Error:0.2678723095120438

But if I use np.random.seed(10) as the seed, the error converges to a relatively small value by about epoch 600:


Epoch:0/20001 Error:0.283958515549615
Epoch:200/20001 Error:0.260819823215878
Epoch:400/20001 Error:0.23267630899157743
Epoch:600/20001 Error:0.0022589485429890047
Epoch:800/20001 Error:0.0007425256677052262
Epoch:1000/20001 Error:0.0003946220094805989
....
Epoch:2800/20001 Error:0.00011495288247859594
Epoch:3000/20001 Error:9.989662843897715e-05
....
Epoch:20000/20001 Error:4.6146397913360866e-05

Learning the cos function with np.random.seed(10), the fit at epoch 600 looks like this:
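This sensitivity to the seed suggests the very small initial weight scale (wb_width = 0.01 in the code below) is part of the problem. One common alternative, shown here only as a sketch and not something verified in this post, is to scale the initial weights by the fan-in (a Xavier-style initialization):

import numpy as np

def init_layer(n_upper, n):
    # Hypothetical Xavier-style initialization: the scale shrinks with the number of inputs,
    # instead of the fixed wb_width * np.random.randn(...) used in the original code.
    scale = np.sqrt(1.0 / n_upper)
    w = scale * np.random.randn(n_upper, n)  # weight (matrix)
    b = np.zeros(n)  # bias (vector), started at zero
    return w, b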



I am using neural network backpropagation regression to learn the cos function. When I learn the sin function everything is fine, but as soon as I switch to cos it no longer works. What is the problem?

correct_data = np.cos(input_data)

Related settings:

1. Activation function of the middle (hidden) layer: sigmoid function

2. Activation function of the output layer: identity function

3. Loss function: sum of squared errors

4. Optimization algorithm: stochastic gradient descent (SGD)

5. Batch size: 1
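
Spelled out, these settings correspond to the standard formulas below (my own notation: $u$ is the pre-activation, $y$ the layer output, $t$ the target); they are exactly the delta expressions used in MiddleLayer.backward and OutputLayer.backward in the code:

$$\sigma(u) = \frac{1}{1 + e^{-u}}, \qquad \sigma'(u) = \sigma(u)\,\bigl(1 - \sigma(u)\bigr)$$

$$E = \frac{1}{2}\sum_k (y_k - t_k)^2, \qquad \frac{\partial E}{\partial y_k} = y_k - t_k$$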

My code is as follows:

import numpy as np
import matplotlib.pyplot as plt

# - Prepare the input and correct-answer data -
input_data = np.arange(0, np.pi * 2, 0.1)  # input
correct_data = np.cos(input_data)  # correct answer
input_data = (input_data - np.pi) / np.pi  # Scale the input to the range -1.0 to 1.0
n_data = len(correct_data)  # number of data points

# - Each setting value -
n_in = 1  # The number of neurons in the input layer
n_mid = 3  # The number of neurons in the middle layer
n_out = 1  # The number of neurons in the output layer

wb_width = 0.01  # Spread of the initial weights and biases
eta = 0.1  # learning rate
epoch = 2001
interval = 200  # Interval (in epochs) for displaying progress


# -- middle layer --
class MiddleLayer:
    def __init__(self, n_upper, n):  # Initialize settings
        self.w = wb_width * np.random.randn(n_upper, n)  # weight (matrix)
        self.b = wb_width * np.random.randn(n)  # bias (vector)

    def forward(self, x):  # forward propagation
        self.x = x
        u = np.dot(x, self.w) + self.b
        self.y = 1 / (1 + np.exp(-u))  # Sigmoid function

    def backward(self, grad_y):  # Backpropagation
        delta = grad_y * (1 - self.y) * self.y  # Differentiation of Sigmoid function

        self.grad_w = np.dot(self.x.T, delta)
        self.grad_b = np.sum(delta, axis=0)

        self.grad_x = np.dot(delta, self.w.T)

    def update(self, eta):  # update of weight and bias
        self.w -= eta * self.grad_w
        self.b -= eta * self.grad_b


# - Output layer -
class OutputLayer:
    def __init__(self, n_upper, n):  # Initialize settings
        self.w = wb_width * np.random.randn(n_upper, n)  # weight (matrix)
        self.b = wb_width * np.random.randn(n)  # bias (vector)

    def forward(self, x):  # forward propagation
        self.x = x
        u = np.dot(x, self.w) + self.b
        self.y = u  # Identity function

    def backward(self, t):  # Backpropagation
        delta = self.y - t

        self.grad_w = np.dot(self.x.T, delta)
        self.grad_b = np.sum(delta, axis=0)

        self.grad_x = np.dot(delta, self.w.T)

    def update(self, eta):  # update of weight and bias
        self.w -= eta * self.grad_w
        self.b -= eta * self.grad_b


# - Initialization of each network layer -
middle_layer = MiddleLayer(n_in, n_mid)
output_layer = OutputLayer(n_mid, n_out)

# -- learn --
for i in range(epoch):

    # Randomly scramble the index value
    index_random = np.arange(n_data)
    np.random.shuffle(index_random)

    # Used for the display of results
    total_error = 0
    plot_x = []
    plot_y = []

    for idx in index_random:

        x = input_data[idx:idx + 1]  # input
        t = correct_data[idx:idx + 1]  # correct answer

        # Forward propagation
        middle_layer.forward(x.reshape(1, 1))  # Convert the input to a matrix
        output_layer.forward(middle_layer.y)

        # Backpropagation
        output_layer.backward(t.reshape(1, 1))  # Convert the correct answer to a matrix
        middle_layer.backward(output_layer.grad_x)

        # Update of weights and biases
        middle_layer.update(eta)
        output_layer.update(eta)

        if i % interval == 0:
            y = output_layer.y.reshape(-1)  # Restore the matrix to a vector

            # Error calculation
            total_error += 1.0 / 2.0 * np.sum(np.square(y - t))  # Square sum error

            # Output record
            plot_x.append(x)
            plot_y.append(y)

    if i % interval == 0:
        # Display the number of epochs and errors
        print("Epoch:" + str(i) + "/" + str(epoch), "Error:" + str(total_error / n_data))

        # Display the output with a graph
        plt.plot(input_data, correct_data, linestyle="dashed")
        plt.scatter(plot_x, plot_y, marker="+")
        plt.show()

If increasing the number of epochs helps, then the model simply needs more training.

But you may be overfitting... Note that the cosine function is a periodic function, yet you are approximating it with only monotonic functions (the sigmoid and the identity).

So over the limited interval of your data it may work:

It does not generalize well:


Code for the plots above:

import math as m
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets

from tensorflow import keras
from tensorflow.keras import layers

t, _ = datasets.make_blobs(n_samples=7500, centers=[[0, 0]], cluster_std=1, random_state=0)
X = np.array(list(filter(lambda x : m.cos(4*x[0]) - x[1] < -.5 or m.cos(4*x[0]) - x[1] > .5, t)))
Y = np.array([1 if m.cos(4*x[0]) - x[1] >= 0 else 0 for x in X])  # binary labels: 1 on one side of the cosine curve, 0 on the other

model = keras.models.Sequential()
model.add(layers.Dense(8, input_dim=2, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss="binary_crossentropy")

model.fit(X, Y, batch_size=500, epochs=3000)

# create a mesh to plot in
h = .02  # step size in the mesh
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
meshData = np.c_[xx.ravel(), yy.ravel()]

fig, ax = plt.subplots()
Z = model.predict(meshData)
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, alpha=.3, cmap=plt.cm.Paired)
ax.axis('off')

# Plot also the training points, colored by the model's predicted class
colors = np.array(['r', 'b'])  # assumed palette, one color per class (not specified in the original answer)
T = np.round(model.predict(X)).astype(int).reshape(-1)  # threshold the sigmoid outputs at 0.5
ax.scatter(X[:, 0], X[:, 1], color=colors[T].tolist(), s=10, alpha=0.9)
plt.show()

# add duplicate plotting code here to generate second plot
# predicting on data generated from a blob
# with a larger standard deviation
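
A minimal sketch of what that second plot could look like, continuing the script above (the wider spread cluster_std=2 and the new random_state are my assumptions, not values from the original answer):

# Hypothetical second plot: same trained model, but points drawn from a wider blob.
t2, _ = datasets.make_blobs(n_samples=7500, centers=[[0, 0]], cluster_std=2, random_state=1)
X2 = np.array(list(filter(lambda x: m.cos(4*x[0]) - x[1] < -.5 or m.cos(4*x[0]) - x[1] > .5, t2)))

fig2, ax2 = plt.subplots()
ax2.contourf(xx, yy, Z, alpha=.3, cmap=plt.cm.Paired)  # reuse the decision surface computed above
ax2.axis('off')

T2 = np.round(model.predict(X2)).astype(int).reshape(-1)  # predicted class per point
ax2.scatter(X2[:, 0], X2[:, 1], color=colors[T2].tolist(), s=10, alpha=0.9)
plt.show()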