Re-initialize variables in customized loss function in Keras

I wrote a custom regularizer my_reg whose loss computation involves a variable z:

def __call__(self, loss):
    z = K.variable(value=self.w)  # I need z to be re-initialized every time
    z = K.print_tensor(z, message='time1: ')

    # BELOW: SOME COMPUTATION THAT RANDOMLY ZEROES OUT ROWS OF z
    n_freeze = SOME_FIXED_VALUE
    idx = tf.range(tf.shape(z)[0])
    random_choice = tf.random_shuffle(idx)[:n_freeze]
    z = K.variable(z)
    z = tf.scatter_update(z, random_choice, np.zeros((n_freeze, x_cols)))
    # ABOVE: SOME COMPUTATION THAT RANDOMLY ZEROES OUT ROWS OF z

    z = K.print_tensor(z, message='time2: ')

    regularized_loss += ...  # some computation that involves z

    z = K.print_tensor(z, message='time3: ')

I want z to be re-initialized on every call of the loss function, i.e. z = K.variable(value=w) needs to run each time. However, the printed output only ever shows time2:, and z does not seem to be re-initialized on each call. How can I achieve this?
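
Why only time2: appears can be sketched under the assumption of TF 1.x graph mode (the variable names below are illustrative, not from the original code): the regularizer's __call__ runs only once, while the graph is built; a K.variable created inside it is initialized a single time, and a K.print_tensor node fires only when it lies on the path the session actually evaluates:

import numpy as np
import tensorflow as tf
from keras import backend as K

sess = K.get_session()

a = K.variable(np.ones(3, dtype='float32'))
sess.run(a.initializer)

b = K.print_tensor(a, message='time1: ')
c = K.variable(b)               # this variable's initializer consumes b
sess.run(c.initializer)         # 'time1: ' prints here, exactly once

d = K.print_tensor(c, message='time2: ')
loss = K.sum(d)
unused = K.print_tensor(d, message='time3: ')  # output discarded, never evaluated

for _ in range(3):
    sess.run(loss)              # prints 'time2: ' on each run; the others never recur

So time1: fires only when the second variable's initializer runs, time3:'s output is never consumed, and only time2: sits on the path to the loss.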


Edit: including the __init__ function:

class my_reg(Regularizer):
    def __init__(self, scale, gamma, b, cnn=False, detector=None, test=True, batch_size=50):
        self.scale = scale
        self.gamma = gamma
        self.b = b
        self.cnn = cnn
        self.w = None
        self.p = None
        self.detector = detector
        self.test = test
        self.batch_size = batch_size

        # training flag
        self.uses_learning_phase = True

        # whether or not use secret_X dropout
        self.dropout = 0.25

Don't update. Just create new tensors. In graph mode the regularizer's __call__ runs only once, when the graph is built, so K.variable creates its variable a single time and never re-initializes it per step; ordinary ops such as tf.random.uniform, by contrast, produce a fresh value every time the graph is evaluated:

# Cast so that the division and the comparison below are both float32
zero_probability = tf.cast(n_freeze, tf.float32) / tf.cast(tf.shape(z)[0], tf.float32)

# A fresh per-row sample on every evaluation of the graph
drop = tf.random.uniform(tf.shape(z)[:1])
drop = tf.cast(tf.greater(drop, zero_probability), tf.float32)

# Expand to (rows, 1) so the mask broadcasts across the columns of z
z = tf.expand_dims(drop, -1) * z
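
Putting it together, a minimal sketch of how __call__ might look with this approach (TF 1.x assumed; SOME_FIXED_VALUE is the placeholder from the question, and the final L2-style term merely stands in for the elided computation involving z):

import tensorflow as tf
from keras import backend as K

def __call__(self, x):
    # x is the weight tensor Keras passes into a Regularizer.
    # No K.variable and no scatter_update: every op below is an ordinary
    # graph node, so a fresh mask is drawn on every training step.
    n_freeze = SOME_FIXED_VALUE  # placeholder from the question

    n_rows = tf.cast(tf.shape(x)[0], tf.float32)
    zero_probability = tf.cast(n_freeze, tf.float32) / n_rows

    # Zero out each row independently with probability zero_probability
    drop = tf.random.uniform(tf.shape(x)[:1])
    drop = tf.cast(tf.greater(drop, zero_probability), tf.float32)
    z = tf.expand_dims(drop, -1) * x

    z = K.print_tensor(z, message='masked z: ')

    # Stand-in for the question's "#some computation involves z#"
    return self.scale * K.sum(K.square(z))

Note that this zeroes each row independently with probability n_freeze / rows, so exactly n_freeze rows are dropped only in expectation; if exactly n_freeze rows must be zeroed on each step, shuffle the row indices with tf.random_shuffle and turn the first n_freeze of them into a 0/1 mask instead of calling scatter_update.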