Approximating a determinant with Keras
I am training a Keras dense model to approximate the determinant of 2x2 matrices. I used 30 hidden layers with 100 nodes each and 10^6 matrices (with entries in the interval [0, 100)). After predicting on the test set (33.3% of the total), I computed the square root of the MSE, and the values I get are typically no larger than about 100. I consider this a rather high error (although I am not sure what would count as a good error in this setting), but apart from increasing the number of samples (and 10^6 is already a lot), I am not sure how to improve it. I am hoping someone can offer some advice. Here is the code:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
### Select number of samples, matrix size and range of entries in matrices
nb_samples = 1000000
matrix_size = 2
entries_range = 100
### Generate random matrices and determinants
matrices = []
determinants = []
for i in range(nb_samples):
    matrix = np.random.randint(entries_range, size = (matrix_size,matrix_size))
    matrices.append(matrix.reshape(matrix_size**2,))
    determinants.append(np.array(np.linalg.det(matrix)).reshape(1,))
matrices = np.array(matrices)
determinants = np.array(determinants)
### Split the data
matrices_train, matrices_test, determinants_train, determinants_test = train_test_split(matrices,determinants,train_size = 0.66)
### Select number of layers and neurons
nb_layers = 30
nb_neurons = 100
### Create dense neural network with nb_layers hidden layers having nb_neurons neurons each
model = Sequential()
model.add(Dense(nb_neurons, input_dim = matrix_size**2, activation='relu'))
for i in range(nb_layers):
    model.add(Dense(nb_neurons, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(matrices_train, determinants_train, epochs = 10, batch_size = 100, verbose = 0)
#_ , test_acc = model.evaluate(matrices_test,determinants_test)
#print(test_acc)
### Make a prediction on the test set
determinants_pred = model.predict(matrices_test)
print('''
RMSE: {}
Number of layers: {}
Number of neurons: {}
Number of samples: {}
'''.format(np.sqrt(mean_squared_error(determinants_test,determinants_pred)),nb_layers,nb_neurons,nb_samples))
Here is one output:
- RMSE: 20.429616387932295
- Number of layers: 32
- Number of neurons: 32
- Number of samples: 1000000
Note: I settled on 30 layers with 100 nodes per layer by trial and error (the MSE seemed to be lowest around those values).
I think your network is huge for the size of the problem (input dim = 4, output dim = 1), and you are not training for enough epochs.

We can also cheat a little here: since we know the computation can essentially be expressed in terms of squares of linear combinations of the inputs, we can use a custom x*x activation (the identity behind this is spelled out after the output below). As an example, 10 neurons, 1 hidden layer, the custom activation just described, epochs = 1000 and nsamples = 10000 produces
RMSE: 0.04413008355924881
Number of layers: 1
Number of neurons: 10
Number of samples: 10000
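For reference, here is one way to see why squared linear combinations suffice (this identity is my addition, not part of the original answer). For a 2x2 matrix with entries a, b, c, d in row-major order, the determinant is

ad - bc = ((a + d)^2 - (a - d)^2 - (b + c)^2 + (b - c)^2) / 4

so a single hidden layer that squares linear combinations of the inputs, followed by a linear output layer, can represent the determinant exactly.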
Here is the complete code with my slight modifications:
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
### Select number of samples, matrix size and range of entries in matrices
nb_samples = 10000  # reduced from 1000000
matrix_size = 2
entries_range = 100
### Generate random matrices and determinants
matrices = []
determinants = []
for i in range(nb_samples):
    matrix = np.random.randint(entries_range, size = (matrix_size,matrix_size))
    matrices.append(matrix.reshape(matrix_size**2,))
    determinants.append(np.array(np.linalg.det(matrix)).reshape(1,))
matrices = np.array(matrices)
determinants = np.array(determinants)
### Split the data
matrices_train, matrices_test, determinants_train, determinants_test = train_test_split(matrices,determinants,train_size = 0.66)
### Select number of layers and neurons
nb_layers = 1   # reduced from 30
nb_neurons = 10  # reduced from 100
### Create dense neural network with nb_layers hidden layers having nb_neurons neurons each
model = Sequential()
model.add(Dense(nb_neurons, input_dim = matrix_size**2, activation=lambda x:x*x))
#for i in range(nb_layers):
# model.add(Dense(nb_neurons, activation='relu'))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(matrices_train, determinants_train, epochs = 1000, batch_size = 100, verbose = 1)
#_ , test_acc = model.evaluate(matrices_test,determinants_test)
#print(test_acc)
### Make a prediction on the test set
determinants_pred = model.predict(matrices_test)
print('''
RMSE: {}
Number of layers: {}
Number of neurons: {}
Number of samples: {}
'''.format(np.sqrt(mean_squared_error(determinants_test,determinants_pred)),nb_layers,nb_neurons,nb_samples))
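As a quick sanity check (my addition, not part of the original answer), you can compare the trained model's prediction on a single known matrix with numpy's exact determinant:

### Sanity check: det([[1, 2], [3, 4]]) should be -2
sample = np.array([[1, 2, 3, 4]])            # flattened [[1, 2], [3, 4]]
print(model.predict(sample))                 # prediction, should be close to -2
print(np.linalg.det(sample.reshape(2, 2)))   # exact value from numpy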