How to lock specific values of a Tensor in TensorFlow?
I'm trying to apply the lottery ticket hypothesis to a simple neural network written in TensorFlow 2.0 (using the Keras interface), which looks like this:
from tensorflow.keras import models, layers, optimizers, losses

net = models.Sequential()
net.add(layers.Dense(256, activation="softsign", name="Dense0", bias_initializer="ones"))
net.add(layers.Dense(128, activation="softsign", name="Dense1", bias_initializer="ones"))
net.add(layers.Dense(64, activation="softsign", name="Dense2", bias_initializer="ones"))
net.add(layers.Dense(32, activation="softsign", name="Dense3", bias_initializer="ones"))
net.add(layers.Dense(1, activation="tanh", name="Output", bias_initializer="ones"))
I then train my network using the Adam optimizer and binary cross-entropy loss:
net.compile(optimizer=optimizers.Adam(learning_rate=0.001),
            loss=losses.BinaryCrossentropy(), metrics=["accuracy"])
net.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_test, y_test))
After the training process, I want to lock specific weights in my network. The problem is that (as far as I know) I can only make a Tensor non-trainable with tensorflow.Variable(..., trainable=False), but doing so sets a whole node of my graph to non-trainable, and I want to fix only specific edges. I can iterate over all the Tensor instances of my network with the following code:
for i in range(len(net.layers)):
    for j in range(net.layers[i].variables[0].shape[0]):
        for k in range(net.layers[i].variables[0][j].shape[0]):
            ...
but I don't know what to do next. Does anyone know a simple way I could do this?
Maybe you could subclass the Dense layer? Something like:
import tensorflow as tf
from tensorflow import keras

class PrunableDense(keras.layers.Dense):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.deleted_channels = None
        self.deleted_bias = None
        self._kernel = None
        self._bias = None

    def build(self, input_shape):
        last_dim = input_shape[-1]
        self._kernel = self.add_weight(
            'kernel',
            shape=[last_dim, self.units],
            initializer=self.kernel_initializer,
            regularizer=self.kernel_regularizer,
            constraint=self.kernel_constraint,
            dtype=self.dtype,
            trainable=True)
        self.deleted_channels = tf.ones([last_dim, self.units])  # we'll use this to prune the network
        if self.use_bias:
            self._bias = self.add_weight(
                'bias',
                shape=[self.units,],
                initializer=self.bias_initializer,
                regularizer=self.bias_regularizer,
                constraint=self.bias_constraint,
                dtype=self.dtype,
                trainable=True)
            self.deleted_bias = tf.ones([self.units,])

    @property
    def kernel(self):
        """Gets called whenever self.kernel is used."""
        # only the weights that haven't been deleted should be non-zero;
        # deleted weights are 0's in self.deleted_channels
        return self.deleted_channels * self._kernel

    @property
    def bias(self):
        # similar to kernel
        if not self.use_bias:
            return None
        else:
            return self.deleted_bias * self._bias

    def prune_kernel(self, to_be_deleted):
        """
        Delete some channels.
        to_be_deleted should be a tensor or numpy array of shape kernel.shape
        containing 1's at the locations where weights should be kept, and 0's
        at the locations where weights should be deleted.
        """
        self.deleted_channels *= to_be_deleted

    def prune_bias(self, to_be_deleted):
        assert self.use_bias
        self.deleted_bias *= to_be_deleted

    def prune_kernel_below_threshold(self, threshold=0.01):
        # note: this compares signed values; use tf.abs(self.kernel) here
        # if you want magnitude-based pruning instead
        to_be_deleted = tf.cast(tf.greater(self.kernel, threshold), tf.float32)
        self.deleted_channels *= to_be_deleted

    def prune_bias_below_threshold(self, threshold=0.01):
        assert self.use_bias
        to_be_deleted = tf.cast(tf.greater(self.bias, threshold), tf.float32)
        self.deleted_bias *= to_be_deleted
I haven't tested this extensively, and it definitely needs some polishing, but I think the idea should work.
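For what it's worth, a minimal usage sketch (untested, reusing the model and training setup from your question) might look like this:

# Hypothetical sketch: swap PrunableDense in for Dense, train as usual,
# then zero out the small weights, lottery-ticket style.
net = models.Sequential()
net.add(PrunableDense(256, activation="softsign", name="Dense0", bias_initializer="ones"))
# ... remaining layers as in your model ...
net.add(PrunableDense(1, activation="tanh", name="Output", bias_initializer="ones"))
net.compile(optimizer=optimizers.Adam(learning_rate=0.001),
            loss=losses.BinaryCrossentropy(), metrics=["accuracy"])
net.fit(x_train, y_train, epochs=10, batch_size=32)

for layer in net.layers:
    layer.prune_kernel_below_threshold(0.01)
    layer.prune_bias_below_threshold(0.01)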
Edit: I wrote the above assuming you want to prune the network as in the lottery ticket hypothesis, but if you just want to freeze part of the weights, you can do something similar, adding a frozen_kernel property that has non-zero entries only where self.deleted_channels is 0, and adding it to the trainable kernel.
Edit 2: In the previous edit I meant something like the following:
class FreezableDense(keras.layers.Dense):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.trainable_channels = None
        self.trainable_bias = None
        self._kernel1 = None
        self._bias1 = None
        self._kernel2 = None
        self._bias2 = None

    def build(self, input_shape):
        last_dim = input_shape[-1]
        self._kernel1 = self.add_weight(
            'kernel1',
            shape=[last_dim, self.units],
            initializer=self.kernel_initializer,
            regularizer=self.kernel_regularizer,
            constraint=self.kernel_constraint,
            dtype=self.dtype,
            trainable=True)
        self._kernel2 = tf.zeros([last_dim, self.units])
        self.trainable_channels = tf.ones([last_dim, self.units])  # we'll use this to freeze parts of the network
        if self.use_bias:
            self._bias1 = self.add_weight(
                'bias',
                shape=[self.units,],
                initializer=self.bias_initializer,
                regularizer=self.bias_regularizer,
                constraint=self.bias_constraint,
                dtype=self.dtype,
                trainable=True)
            self._bias2 = tf.zeros([self.units,])
            self.trainable_bias = tf.ones([self.units,])

    @property
    def kernel(self):
        """Gets called whenever self.kernel is used."""
        # trainable entries come from _kernel1, frozen entries from _kernel2
        return self.trainable_channels * self._kernel1 + (1 - self.trainable_channels) * self._kernel2

    @property
    def bias(self):
        # similar to kernel
        if not self.use_bias:
            return None
        else:
            return self.trainable_bias * self._bias1 + (1 - self.trainable_bias) * self._bias2

    def freeze_kernel(self, to_be_frozen):
        """
        Freeze some channels.
        to_be_frozen should be a tensor or numpy array of shape kernel.shape
        containing 1's at the locations where weights should be kept trainable,
        and 0's at the locations where weights should be frozen.
        """
        # we want to do two things: update the weights in self._kernel2
        # and update self.trainable_channels
        # first we update self._kernel2 with all newly frozen weights
        newly_frozen = 1 - tf.maximum((1 - to_be_frozen) - (1 - self.trainable_channels), 0)
        # the above has 0 exactly where to_be_frozen is 0 and
        # self.trainable_channels is 1, if I'm not mistaken
        newly_frozen_weights = (1 - newly_frozen) * self._kernel1
        self._kernel2 += newly_frozen_weights
        # now we update self.trainable_channels:
        self.trainable_channels *= to_be_frozen

    def freeze_bias(self, to_be_frozen):
        assert self.use_bias
        newly_frozen = 1 - tf.maximum((1 - to_be_frozen) - (1 - self.trainable_bias), 0)
        newly_frozen_bias = (1 - newly_frozen) * self._bias1
        self._bias2 += newly_frozen_bias
        self.trainable_bias *= to_be_frozen
(Again, not thoroughly tested, and it definitely needs some polishing, but I think the idea should work.)
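A hedged sketch of how it might be used, say to freeze the large-magnitude weights of one layer after training (the 0.5 cutoff is just a made-up placeholder):

# Hypothetical sketch: the mask has 1's where weights stay trainable and
# 0's where they are frozen at their current values.
layer = net.layers[0]  # assuming net was built from FreezableDense layers
mask = tf.cast(tf.less(tf.abs(layer.kernel), 0.5), tf.float32)
layer.freeze_kernel(mask)
if layer.use_bias:
    layer.freeze_bias(tf.cast(tf.less(tf.abs(layer.bias), 0.5), tf.float32))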
Edit 3: Some more googling got me to what I couldn't find at first: https://www.tensorflow.org/model_optimization/api_docs/python/tfmot/sparsity/keras might provide tools to build pruned models more easily.
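For reference, a minimal sketch of that API (assuming the tensorflow-model-optimization package is installed; this uses the default pruning schedule, so treat it as a starting point rather than a recipe):

import tensorflow_model_optimization as tfmot

# wrap the model so low-magnitude weights are pruned during training
pruned_net = tfmot.sparsity.keras.prune_low_magnitude(net)
pruned_net.compile(optimizer=optimizers.Adam(learning_rate=0.001),
                   loss=losses.BinaryCrossentropy(), metrics=["accuracy"])
pruned_net.fit(x_train, y_train, epochs=10, batch_size=32,
               callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
# remove the pruning wrappers once training is done
final_net = tfmot.sparsity.keras.strip_pruning(pruned_net)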
Edit 4 (further explanation of what _kernel2 and _bias2 do):
For simplicity I'll explain without the bias, but everything applies to the bias as well, mutatis mutandis. Suppose the input of the dense layer is n-dimensional and the output is m-dimensional; what the dense layer then does is multiply the input by an m×n matrix, which we'll call K for short (it's the kernel).
Normally we want to learn the right entries of K through some gradient-based optimization method, but in your case you want to keep certain entries fixed. That's why in this custom Dense layer we split K as follows:
K = T * K1 + (1 - T) * K2,
where:
- T is an m×n matrix of 0's and 1's,
- the asterisk denotes element-wise multiplication,
- 1 is the m×n matrix whose every entry is 1,
- K1 is a learnable m×n matrix,
- K2 is an m×n matrix that is fixed (constant) during training.
If we look at the entries of K, then K[i,j] = T[i,j]*K1[i,j] + (1-T[i,j])*K2[i,j], which is K1[i,j] if T[i,j]==1 and K2[i,j] otherwise. Since in the latter case the value of K1[i,j] has no influence on the result of multiplying by K, its gradient is 0 and it shouldn't change (and even if it did change due to numerical error, that would have no influence on the value of K[i,j]).
So essentially, the entries K[i,j] for which T[i,j]==0 are fixed (with their values stored in K2), while those for which T[i,j]==1 are trainable.
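To make that concrete, here is a tiny sketch (with made-up 2×2 values) verifying that the gradient with respect to K1 vanishes exactly where T is 0:

import tensorflow as tf

K1 = tf.Variable([[1.0, 2.0], [3.0, 4.0]])      # learnable entries
K2 = tf.constant([[10.0, 20.0], [30.0, 40.0]])  # frozen values
T = tf.constant([[1.0, 0.0], [0.0, 1.0]])       # 1 = trainable, 0 = frozen
x = tf.constant([[1.0, 1.0]])

with tf.GradientTape() as tape:
    K = T * K1 + (1 - T) * K2
    y = tf.reduce_sum(tf.matmul(x, K))

print(tape.gradient(y, K1))  # zero exactly where T is 0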