Is there a way to zero out weak weights during training? For example, if the absolute value of a weight is below 0.05, just set that weight to 0.
I'm running into trouble because it seems you can't edit a tensor directly, or simply convert it to numpy and edit it in that form during training. In a way, what I'm looking for is the opposite of the clip function that exists in both TensorFlow and numpy: instead of making sure all values fall between min and max, I want to set all values between min and max to 0. Presumably min and max would be the same value, so it becomes zeroing out any weight whose absolute value is below some given value.
import tensorflow as tf
from tensorflow import keras

class DeleteWeakConnectionsDenseLayer(keras.layers.Layer):
    def __init__(self, units, weak_threshold, **kwargs):
        super(DeleteWeakConnectionsDenseLayer, self).__init__(**kwargs)
        self.units = units
        self.weak_threshold = weak_threshold

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs, training=False):
        if training:
            new_weights = ...  # code here such that weights whose absolute value is below self.weak_threshold are reassigned to 0
            self.w.assign(new_weights)  # assign preserves the tf.Variable
        else:
            pass  # could think about multiplying all weights by a constant here
        return tf.nn.relu(tf.matmul(inputs, self.w) + self.b)
Try this code:
    def call(self, inputs, training=False):
        if training:
            # Boolean mask: True wherever |w| exceeds the threshold
            mask = tf.abs(self.w) > self.weak_threshold
            # Cast the mask to float and multiply, zeroing the weak weights
            new_weights = self.w * tf.cast(mask, tf.float32)
            self.w.assign(new_weights)  # assign preserves the tf.Variable
        else:
            pass  # could think about multiplying all weights by a constant here
        return tf.nn.relu(tf.matmul(inputs, self.w) + self.b)
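The same zeroing can also be written with tf.where, which reads more directly as a "reverse clip". A minimal sketch of this alternative, assuming the same self.w and self.weak_threshold attributes as above:

# Inside call(), when training: pick 0 where |w| is weak,
# otherwise keep the original weight.
new_weights = tf.where(
    tf.abs(self.w) < self.weak_threshold,
    tf.zeros_like(self.w),
    self.w,
)
self.w.assign(new_weights)

Note that either version only masks the weights at the start of each training call; the optimizer can still update a zeroed weight afterwards, so a weight may drift back above the threshold between steps. Permanent pruning would need a persistent mask.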
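A quick way to check the masking behaviour is to call the layer once with training=True and inspect the variable. This smoke test is only a sketch; the input shape, unit count, and threshold are illustrative:

import numpy as np
import tensorflow as tf

# Hypothetical smoke test: shapes and threshold chosen for illustration.
layer = DeleteWeakConnectionsDenseLayer(units=8, weak_threshold=0.05)
x = tf.random.normal((2, 4))
_ = layer(x, training=True)  # builds the layer and applies the mask once

w = layer.w.numpy()
print("weights zeroed:", np.sum(w == 0), "of", w.size)
print("smallest surviving |w|:", np.abs(w[w != 0]).min())  # should be > 0.05

Because the "random_normal" initializer defaults to a small standard deviation (0.05), a good fraction of the freshly initialized weights fall below the threshold, so the zeroing is easy to see on the first call.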