Selectively Optimizing a Keras Model with TensorFlow
I am building a GAN (generative adversarial network) with TensorFlow and Keras. The problem arises when I try to pass the generator's list of training parameters as var_list in the training step.
My generator looks like this:
def create_generator(z_noise):
    # build layer one
    l1 = Dense(h1_size)(z_noise)
    L1 = LeakyReLU(0.1)(l1)
    # layer 2
    l2 = Dense(h2_size)(L1)
    L2 = LeakyReLU(0.1)(l2)
    # layer 3
    l3 = Dense(h3_size)(l2)
    # generated data
    x_generate = sigmoid(l3)
    # params
    g_params = [l1, L1, l2, L2, l3]
    return x_generate, g_params
x_generate is then passed to the discriminator, which is still written in plain TensorFlow and has not yet been converted to Keras. That part works fine until I pass in the parameters to optimize.
# generate the nets
x_generated, g_params = create_generator(z_prior)
y_data, y_generated, d_params = create_discriminator(x_data, x_generated, keep_prob)
# declare loss functions
d_loss = - (tf.log(y_data) + tf.log(1 - y_generated))  # negated, since the optimizer can only minimize
g_loss = - tf.log(y_generated)
# optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
d_trainer = optimizer.minimize(d_loss, var_list=d_params)
g_trainer = optimizer.minimize(g_loss, var_list=g_params)
The result is an error stating
NotImplementedError: ('Trying to update a Tensor ', <tf.Tensor 'dense_4/BiasAdd:0' shape=(256, 20) dtype=float32>)
at the line
g_trainer = optimizer.minimize(g_loss, var_list=g_params)
You are passing the layers' activations in var_list, not the trainable parameters of those layers.
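Calling a Keras layer returns the symbolic output Tensor, while the tf.Variable objects that optimizer.minimize can actually update live on the layer object itself. A minimal sketch of the difference (assuming standalone Keras on the TF 1.x backend; the sizes here are made up for illustration):

from keras.layers import Input, Dense
import tensorflow as tf

x = Input(shape=(100,))
dense = Dense(64)               # the layer object owns the variables
h = dense(x)                    # `h` is a Tensor (an activation); putting it in
                                # var_list triggers the NotImplementedError above
print(dense.trainable_weights)  # [kernel, bias] -- tf.Variable objects, which is
                                # what var_list actually expects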
Try something like this:
def create_generator(z_noise):
    with tf.variable_scope('generator', reuse=tf.AUTO_REUSE):
        # build layer one
        l1 = Dense(h1_size)(z_noise)
        L1 = LeakyReLU(0.1)(l1)
        # layer 2
        l2 = Dense(h2_size)(L1)
        L2 = LeakyReLU(0.1)(l2)
        # layer 3
        l3 = Dense(h3_size)(l2)
        # generated data
        x_generate = sigmoid(l3)
        g_params = tf.get_collection(
            tf.GraphKeys.GLOBAL_VARIABLES, scope='generator')
    return x_generate, g_params
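As a side note: if you only want the trainable variables, tf.trainable_variables also takes a scope filter in recent TF 1.x versions, and if you keep references to the layer objects you can skip variable scopes altogether by concatenating each layer's trainable_weights. A sketch of both variants, reusing the question's h1_size/h2_size/h3_size and substituting tf.sigmoid (assumptions, since the surrounding script isn't shown):

# drop-in replacement for the get_collection line above,
# restricted to trainable variables only
g_params = tf.trainable_variables(scope='generator')

# or without variable scopes: keep the layer objects and collect their weights
def create_generator(z_noise):
    d1, d2, d3 = Dense(h1_size), Dense(h2_size), Dense(h3_size)
    L1 = LeakyReLU(0.1)(d1(z_noise))
    L2 = LeakyReLU(0.1)(d2(L1))
    x_generate = tf.sigmoid(d3(L2))
    g_params = d1.trainable_weights + d2.trainable_weights + d3.trainable_weights
    return x_generate, g_params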