Keras custom loss function per tensor group

I am writing a custom loss function that needs to calculate ratios per group of predictions. As a simplified example, here is what my data and model code look like:

import pandas as pd
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Flatten


def main():
    df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"],
                      data=[[5, 10, "a", "1", 0],
                            [30, 20, "a", "1", 1],
                            [50, 40, "a", "1", 0],
                            [15, 20, "a", "2", 0],
                            [25, 30, "b", "2", 1],
                            [35, 40, "b", "1", 0],
                            [10, 80, "b", "1", 1]])
    features = ["feature_1", "feature_2"]
    conds_and_label = ["condition_1", "condition_2", "label"]
    X = df[features]
    Y = df[conds_and_label]
    model = my_model(input_shape=len(features))
    model.fit(X, Y, epochs=10, batch_size=128)
    model.evaluate(X, Y)


def custom_loss(conditions, y_pred):  # this is what I need help with
    conds = ["condition_1", "condition_2"]
    conditions["label_pred"] = y_pred
    g = conditions.groupby(by=conds,
                           as_index=False).apply(lambda x: x["label_pred"].sum() /
                                                           len(x)).reset_index(name="pred_ratio")
    # true_ratios will be a constant, external DataFrame. Simplified example here:
    true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"],
                               data=[["a", "1", 0.1],
                                     ["a", "2", 0.2],
                                     ["b", "1", 0.8],
                                     ["b", "2", 0.9]])
    merged = pd.merge(g, true_ratios, on=conds)
    merged["diff"] = merged["pred_ratio"] - merged["true_ratio"]
    return K.mean(K.abs(merged["diff"]))


def joint_loss(conds_and_label, y_pred):
    y_true = conds_and_label[:, 2]
    conditions = tf.gather(conds_and_label, [0, 1], axis=1)
    loss_1 = standard_loss(y_true=y_true, y_pred=y_pred)  # not shown
    loss_2 = custom_loss(conditions=conditions, y_pred=y_pred)
    return 0.5 * loss_1 + 0.5 * loss_2


def my_model(input_shape=None):
    model = Sequential()
    model.add(Dense(units=2, activation="relu", input_shape=(input_shape,)))
    model.add(Dense(units=1, activation='sigmoid'))
    model.add(Flatten())
    model.compile(loss=joint_loss, optimizer="Adam",
                  metrics=[joint_loss, custom_loss, "accuracy"])
    return model

What I need help with is the custom_loss function. As you can see, it is currently written as if the inputs were Pandas DataFrames. However, the inputs will be Keras tensors (with the tensorflow backend), so I am trying to figure out how to convert the current code in custom_loss to use Keras/TF backend functions. For example, I searched online and couldn't find a way to do a groupby in Keras/TF to get the ratios I need...

Some context/explanation that might be helpful:

  1. My main loss function is joint_loss, which consists of standard_loss (not shown) and custom_loss. But I only need help converting custom_loss.
  2. What custom_loss does is (a worked example on the toy data follows this list):
    1. Group by the two condition columns (these two columns represent the groups in the data).
    2. For each group, get the ratio of predicted 1s to the number of samples in that group (sum / len in the pandas code).
    3. Compare each "pred_ratio" to a set of "true_ratio"s and take the difference.
    4. Compute the mean absolute error from those differences.
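
To make those steps concrete, here is roughly what they produce on the toy data above, using a hypothetical set of hard 0/1 predictions in place of y_pred and the true_ratio values from the custom_loss sketch above:

import pandas as pd

# The two condition columns from the toy DataFrame above, plus a
# hypothetical set of hard 0/1 predictions standing in for y_pred.
conditions = pd.DataFrame({"condition_1": ["a", "a", "a", "a", "b", "b", "b"],
                           "condition_2": ["1", "1", "1", "2", "2", "1", "1"],
                           "label_pred":  [0,   1,   0,   0,   1,   0,   1]})

# Steps 1-2: group by both condition columns and take the fraction of
# predicted 1s within each group.
g = (conditions.groupby(["condition_1", "condition_2"], as_index=False)["label_pred"]
     .mean()
     .rename(columns={"label_pred": "pred_ratio"}))
# pred_ratio per group: (a,1) -> 1/3, (a,2) -> 0.0, (b,1) -> 0.5, (b,2) -> 1.0

# Steps 3-4: compare against the true ratios and average the absolute differences.
true_ratios = pd.DataFrame({"condition_1": ["a", "a", "b", "b"],
                            "condition_2": ["1", "2", "1", "2"],
                            "true_ratio":  [0.1, 0.2, 0.8, 0.9]})
merged = g.merge(true_ratios, on=["condition_1", "condition_2"])
print((merged["pred_ratio"] - merged["true_ratio"]).abs().mean())  # ~0.208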

I ended up figuring out a solution, although I would like some feedback on it (specifically some parts). Here is the solution:

import pandas as pd
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from tensorflow.python.ops import gen_array_ops


def main():
    df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"],
                      data=[[5, 10, "a", "1", 0],
                            [30, 20, "a", "1", 1],
                            [50, 40, "a", "1", 0],
                            [15, 20, "a", "2", 0],
                            [25, 30, "b", "2", 1],
                            [35, 40, "b", "1", 0],
                            [10, 80, "b", "1", 1]])
    df = pd.concat([df] * 500)  # making data artificially larger
    true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"],
                               data=[["a", "1", 0.1],
                                     ["a", "2", 0.2],
                                     ["b", "1", 0.8],
                                     ["b", "2", 0.9]])
    features = ["feature_1", "feature_2"]
    conditions = ["condition_1", "condition_2"]
    conds_ratios_label = conditions + ["true_ratio", "label"]
    df = pd.merge(df, true_ratios, on=conditions, how="left")
    X = df[features]
    Y = df[conds_ratios_label]
    # need to convert strings to ints because tensors can't mix strings with floats/ints
    mapping_1 = {"a": 1, "b": 2}
    mapping_2 = {"1": 1, "2": 2}
    Y.replace({"condition_1": mapping_1}, inplace=True)
    Y.replace({"condition_2": mapping_2}, inplace=True)
    X = tf.convert_to_tensor(X)
    Y = tf.convert_to_tensor(Y)
    model = my_model(input_shape=len(features))
    model.fit(X, Y, epochs=1, batch_size=64)
    print()
    print(model.evaluate(X, Y))


def custom_loss(conditions, true_ratios, y_pred):
    y_pred = tf.sigmoid((y_pred - 0.5) * 1000)
    uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0])
    num_unique = tf.size(count)
    sums = tf.math.unsorted_segment_sum(data=y_pred, segment_ids=idx, num_segments=num_unique)
    lengths = tf.cast(count, tf.float32)
    pred_ratios = tf.divide(sums, lengths)
    mean_pred_ratios = tf.math.reduce_mean(pred_ratios)
    mean_true_ratios = tf.math.reduce_mean(true_ratios)
    diff = mean_pred_ratios - mean_true_ratios
    return K.mean(K.abs(diff))


def standard_loss(y_true, y_pred):
    return tf.losses.binary_crossentropy(y_true=y_true, y_pred=y_pred)


def joint_loss(conds_ratios_label, y_pred):
    y_true = conds_ratios_label[:, 3]
    true_ratios = conds_ratios_label[:, 2]
    conditions = tf.gather(conds_ratios_label, [0, 1], axis=1)
    loss_1 = standard_loss(y_true=y_true, y_pred=y_pred)
    loss_2 = custom_loss(conditions=conditions, true_ratios=true_ratios, y_pred=y_pred)
    return 0.5 * loss_1 + 0.5 * loss_2


def my_model(input_shape=None):
    model = Sequential()
    model.add(Dropout(0, input_shape=(input_shape,)))
    model.add(Dense(units=2, activation="relu"))
    model.add(Dense(units=1, activation='sigmoid'))
    model.add(Flatten())
    model.compile(loss=joint_loss, optimizer="Adam",
                  metrics=[joint_loss, "accuracy"],  # had to remove custom_loss because it takes 3 args now
                  run_eagerly=True)
    return model


if __name__ == '__main__':
    main()

The main update is custom_loss. I removed creating the true_ratios DataFrame inside custom_loss and instead appended it to Y in main. Now custom_loss takes 3 arguments, one of which is the true_ratios tensor. I had to use gen_array_ops.unique_with_counts_v2 and unsorted_segment_sum to get the sums per group of conditions. Then I got the lengths of each group in order to create pred_ratios (the per-group ratios computed from y_pred). Finally I get the mean predicted ratios and the mean true ratios, and take the absolute difference to get my custom loss.
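
As a minimal standalone sketch (with made-up condition and prediction values), here is how those two ops combine into per-group ratios, mirroring the custom_loss above:

import tensorflow as tf
from tensorflow.python.ops import gen_array_ops

# Made-up example: two integer-encoded condition columns, one row per sample.
conditions = tf.constant([[1., 1.],
                          [1., 1.],
                          [1., 2.],
                          [2., 1.]])
y_pred = tf.constant([[0.9], [0.1], [0.8], [0.2]])

# With axis=[0], unique_with_counts_v2 treats each row as a single key,
# which is what lets us "group by" both condition columns at once.
uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0])
# uniques: the distinct (condition_1, condition_2) rows
# idx:     which unique row each sample belongs to, e.g. [0, 0, 1, 2]
# count:   how many samples fall into each group, e.g. [2, 1, 1]

# Sum the predictions within each group, then divide by the group size.
sums = tf.math.unsorted_segment_sum(data=y_pred, segment_ids=idx,
                                    num_segments=tf.size(count))
pred_ratios = sums / tf.cast(count, tf.float32)[:, None]
# pred_ratios: [[0.5], [0.8], [0.2]] -- one predicted ratio per group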

A few notes:

  1. Because the last layer of my model is a sigmoid, my y_pred values are probabilities between 0 and 1, so I need to convert them into 0s and 1s in order to compute the ratios I need in the custom loss. At first I tried tf.round, but I realized that is not differentiable. So instead I replaced it with y_pred = tf.sigmoid((y_pred - 0.5) * 1000) inside custom_loss. This essentially pushes all y_pred values to 0 or 1, but in a differentiable way (see the sketch after this list). It feels like a bit of a "hack" though, so please let me know if you have any feedback on this.
  2. I noticed that my model only works when I use run_eagerly=True in model.compile(). Otherwise I get this error: "ValueError: Dimensions must be equal, but are 1 and 2 for ...". I'm not sure why this happens, but the error originates from the line where I use tf.unsorted_segment_sum.
  3. unique_with_counts_v2 does not actually exist in the public tensorflow API yet, but it does exist in the source code. I needed it in order to group by multiple conditions (not just a single one).
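
Regarding note 1, here is a small sketch (on made-up probabilities) of why tf.round was replaced: no useful gradient flows through it, whereas the steep sigmoid produces nearly hard 0/1 values while staying differentiable:

import tensorflow as tf

y_pred = tf.constant([0.1, 0.49, 0.51, 0.9])  # made-up probabilities

with tf.GradientTape() as tape:
    tape.watch(y_pred)
    hard = tf.round(y_pred)          # exact 0s and 1s...
print(tape.gradient(hard, y_pred))   # ...but no useful gradient flows through round

with tf.GradientTape() as tape:
    tape.watch(y_pred)
    soft = tf.sigmoid((y_pred - 0.5) * 1000)  # approximately 0s and 1s
print(soft)                          # ~[0., 0., 1., 1.]
print(tape.gradient(soft, y_pred))   # nonzero, but only very close to 0.5

With a factor of 1000 the gradient is essentially zero outside a very narrow band around 0.5, so the scale factor controls the trade-off between how hard the rounding is and how much gradient survives.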

If you have any feedback on this, either in general or in response to the bullets above, please feel free to comment.