Neural network doesn't fit boundaries

I'm new to machine learning and I'm trying to fit a sample dataset with a neural network in Python using TensorFlow. After implementing the neural network in Dymola, I want to compare the output of the function with the output of the neural network.

The sample dataset is:

import tensorflow as tf
from keras import metrics
import numpy as np
from keras.models import *
from keras.layers import Input, Dense, Dropout
from keras import optimizers
from keras.callbacks import *
import scipy.io as sio
import mat4py as m4p


inputs = np.linspace(0, 15, num=3000)
outputs = 1/7 * ((inputs/5)**3 - (inputs/3)**2 + 5)
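
As a quick sanity check (not part of the original post, and assuming matplotlib is available), the target curve can be plotted before any training:

import matplotlib.pyplot as plt

plt.plot(inputs, outputs)
plt.xlabel('inputs')
plt.ylabel('outputs')
plt.title('target function')
plt.show()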

The inputs and outputs are then scaled to the interval [0; 0.9]:

inputs_max = np.max(inputs)
inputs_min = np.min(inputs)
outputs_max = np.max(outputs)
outputs_min = np.min(outputs)

upper_bound = 0.9
lower_bound = 0

m_in = (upper_bound - lower_bound) / (inputs_max - inputs_min)
c_in = upper_bound - (m_in * inputs_max)
scaled_in = m_in * inputs + c_in

m_out = (upper_bound - lower_bound) / (outputs_max - outputs_min)
c_out = upper_bound - (m_out * outputs_max)
scaled_out = m_out * outputs + c_out
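
For reference, the same [0; 0.9] scaling can also be done with scikit-learn's MinMaxScaler; this is just an equivalent sketch (assuming scikit-learn is installed), and its inverse_transform is handy later for mapping predictions back to the original range:

from sklearn.preprocessing import MinMaxScaler

in_scaler = MinMaxScaler(feature_range=(0, 0.9))
out_scaler = MinMaxScaler(feature_range=(0, 0.9))
scaled_in_alt = in_scaler.fit_transform(inputs.reshape(-1, 1)).ravel()
scaled_out_alt = out_scaler.fit_transform(outputs.reshape(-1, 1)).ravel()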

Then the neural network is trained:

# shuffle values

def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = np.empty(a.shape, dtype=a.dtype)
    shuffled_b = np.empty(b.shape, dtype=b.dtype)
    permutation = np.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b

tf_features_64 = scaled_in
tf_labels_64 = scaled_out
tf_features_32 = tf_features_64.astype(np.float32)
tf_labels_32 = tf_labels_64.astype(np.float32)

X = tf_features_32
Y = tf_labels_32

X, Y = shuffle_in_unison(X, Y)  # keep the returned shuffled copies
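
The same shuffle can be written more compactly with NumPy fancy indexing; a minimal equivalent sketch:

permutation = np.random.permutation(len(X))
X, Y = X[permutation], Y[permutation]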


# define callbacks

filepath = "weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"

savebestCallBack = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
                                   save_best_only=True, save_weights_only=False,
                                   mode='auto', period=1)

tbCallBack = TensorBoard(log_dir='./Graph',
                         histogram_freq=5,
                         write_graph=True,
                         write_images=True)

esCallback = EarlyStopping(monitor='val_loss',
                           min_delta=0,
                           patience=500,
                           verbose=0,
                           mode='min')


# neural network architecture

visible = Input(shape=(1,)) 
x = Dense(40, activation='tanh')(visible) 
x = Dense(39, activation='tanh')(x) 
x = Dense(38, activation='tanh')(x) 
x = Dense(30, activation='tanh')(x) 
output = Dense(1)(x)


# setup optimizer

Optimizer = optimizers.Adam(lr=0.0007, amsgrad=True)

model = Model(inputs=visible, outputs=output) 

model.compile(optimizer=Optimizer,
              loss='mse',
              metrics=['mae', 'mse'])
model.fit(X, Y, epochs=1000, batch_size=1, verbose=1,
          shuffle=True, validation_split=0.05,
          callbacks=[tbCallBack, esCallback])  # savebestCallBack is defined above but never passed here


# return weights

weights1 = model.layers[1].get_weights()[0]
biases1 = model.layers[1].get_weights()[1]
print('Layer1---------------------------------------------------------------------------------------------------------')
print('weights1:')
print(repr(weights1.transpose()))
print('biases1:')
print(repr(biases1))
w1 = weights1.transpose()
b1 = biases1.transpose()
we1 = {'w1' : w1.tolist()}
bi1 = {'b1' : b1.tolist()}
.........
......
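
Instead of repeating that per-layer block, the export can also be written as a loop; this is just a sketch under the assumption that the dict keys follow the w1/b1 naming above and that the file name weights.mat is free to choose (scipy.io was already imported as sio):

export = {}
for i, layer in enumerate(model.layers[1:], start=1):  # model.layers[0] is the Input layer
    w, b = layer.get_weights()
    export['w%d' % i] = w.transpose()
    export['b%d' % i] = b.transpose()
sio.savemat('weights.mat', export)  # 'weights.mat' is an assumed file name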

Later, I implemented the trained neural network in the program "Dymola" by loading the weights and biases into pre-configured "neural network base classes" (which have already been used several times and are known to work).

// Modelica code for Dymola:

Real inputs;
Real outputs;
Real scaled_outputs;
Real scaled_inputs(start=0);
Real scaled_outputsfunc;
der(scaled_inputs) = 0.9;


//part of the neural network implementation in Dymola

NeuralNetwork.BaseClasses.NeuralNetworkLayer neuralNetworkLayer1(
NeuronActivationFunction=NeuralNetwork.Types.ActivationFunction.TanSig,
numInputs=1,
numNeurons=40,
weightTable=[-0.367953330278397; ......])
annotation (Placement(transformation(extent={{-76,22},{-56,42}})));

//scaled inputs
neuralNetworkLayer1.u[1] = scaled_inputs;

//scaled outputs
neuralNetworkLayer5.y[1]= scaled_outputs;

//scaled_inputs = 0.06 * inputs
inputs = 1/0.06 * (scaled_inputs);

outputs = 1/875 * inputs^3 - 1/63 * inputs^2 + 5/7;

scaled_outputsfunc = 1.2173139581825052 * outputs - 0.3173139581825052;
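
The constants in this Modelica code follow directly from the Python scaling above; a quick check (in Python) that 0.06 is m_in and that 1.2173139581825052 / -0.3173139581825052 are m_out and c_out:

import numpy as np

inputs = np.linspace(0, 15, num=3000)
outputs = 1/7 * ((inputs/5)**3 - (inputs/3)**2 + 5)
m_in = 0.9 / (inputs.max() - inputs.min())
m_out = 0.9 / (outputs.max() - outputs.min())
c_out = 0.9 - m_out * outputs.max()
print(m_in)          # 0.06, hence inputs = scaled_inputs / 0.06
print(m_out, c_out)  # ~1.2173139..., ~-0.3173139...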

When plotting and comparing the scaled output of the function with the (scaled) values returned by the neural network, I noticed that the approximation is very good in the interval [0.5; 0.8], but the closer the inputs get to the boundaries, the worse the approximation becomes.

Unfortunately, I have no idea why this happens or how to fix it. I'd be glad if someone could help me.

I'd like to answer my own question: I forgot to specify the activation function of the output layer in my Python code, in which case Keras defaults to a linear one, see also:

https://keras.io/layers/core/

In my Dymola implementation of the ANN, however, 'tanh' is the activation function of the last layer, and this mismatch caused the divergence near the boundaries.
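
A minimal numerical illustration of why this shows up at the boundaries: the Keras model was trained with a linear output, so its last layer emits the pre-activation z directly, while the Dymola port returns tanh(z); the deviation |tanh(z) - z| is negligible near 0 but grows towards the end of the scaled output range:

import numpy as np

z = np.linspace(0, 0.9, 10)    # the scaled output range used above
print(np.abs(np.tanh(z) - z))  # ~0 near z = 0, ~0.18 at z = 0.9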

The correct Python code for this application must therefore be:

visible = Input(shape=(1,)) 
x = Dense(40, activation='tanh')(visible) 
x = Dense(39, activation='tanh')(x) 
x = Dense(38, activation='tanh')(x) 
x = Dense(30, activation='tanh')(x) 
output = Dense(1, activation='tanh')(x)
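
With tanh on the output layer in both frameworks, a port like the Dymola one can be cross-checked in Python by re-implementing the forward pass by hand from the exported weights; a sketch assuming the retrained model from above:

def manual_forward(x, model):
    # replicate the network: each Dense layer computes x.W + b, followed by tanh
    a = np.asarray(x, dtype=np.float32).reshape(-1, 1)
    for layer in model.layers[1:]:  # skip the Input layer
        w, b = layer.get_weights()
        a = np.tanh(a.dot(w) + b)
    return a

x_test = np.array([0.0, 0.45, 0.9], dtype=np.float32)
print(manual_forward(x_test, model).ravel())
print(model.predict(x_test).ravel())  # should agree to float precision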