Passing MetaData to Custom Loss Function

I want to create a custom loss function that depends on metadata. In its simplest form, I want to multiply the loss by a per-batch weight (determined by the metadata).

For simplicity, consider passing the desired weight directly. Here are two attempts at such a loss function:

def three_arg_loss(loss_func):
    """ a loss function that takes 3 args"""
    def _loss(target,output,weight):
        return weight*loss_func(target,output)
    return _loss

def target_list_loss(loss_func):
    """ a loss function that expects the target arg to be [target,weight]"""
    def _loss(target,output):
        weight=target[1]
        target=target[0]
        return weight*loss_func(target,output)
    return _loss
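To make the intended data contracts concrete, here is a rough sketch of how each wrapper is meant to be paired with its data format (model here is assumed to be a Keras model such as the Toy model in the full example further down):

# Sketch only: `model` is a Keras model (e.g. the Toy model defined below).
loss_func = tf.keras.losses.CategoricalCrossentropy()

# attempt 1: paired with a generator yielding (input, target, weight) triples
model.compile(optimizer='adam', loss=three_arg_loss(loss_func))

# attempt 2: paired with a generator yielding (input, [target, weight]) pairs
model.compile(optimizer='adam', loss=target_list_loss(loss_func))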

When I tried to train, I got the following:

Of course I triple-checked, and I am indeed passing 3 arguments.

Again triple-checked: I am indeed passing [target,weight] as the target argument. At this point I worried I might have mixed up the order of the loss function arguments, so I flipped them just to be sure and got ValueError: Shapes (None, None, 4) and (None, None, None, None) are incompatible.

Thoughts? What is the correct/best way to have a loss function that depends on additional data (in my case geolocation)?

As requested below, here is a complete (but silly) example showing the error:

import tensorflow as tf
from tensorflow.keras import layers

BATCH_SIZE=2
SIZE=3
STEPS=8
EPOCHS=3
NB_CLASSES=4


def gen_inpt(ch_in):
    return tf.random.uniform((BATCH_SIZE,SIZE,SIZE,ch_in))

def gen_targ(nb_classes):
    t=tf.random.uniform((BATCH_SIZE,SIZE,SIZE),maxval=nb_classes,dtype=tf.int32)
    return tf.keras.utils.to_categorical(t,num_classes=nb_classes)

def gen(ch_in,ch_out):
    """ yields (input, target) pairs """
    return ( ( gen_inpt(ch_in), gen_targ(ch_out) ) for b in range(BATCH_SIZE*STEPS*EPOCHS) )

def gen_targ_list(ch_in,ch_out):
    """ yields (input, [target, weight]) pairs """
    return ( ( gen_inpt(ch_in), [gen_targ(ch_out), tf.fill([1],2222)] ) for b in range(BATCH_SIZE*STEPS*EPOCHS) )

def gen_3args(ch_in,ch_out):
    """ yields (input, target, weight) triples """
    return ( ( gen_inpt(ch_in), gen_targ(ch_out), tf.fill([1],10000.0) ) for b in range(BATCH_SIZE*STEPS*EPOCHS) )


class Toy(tf.keras.Model):
    
    def __init__(self,nb_classes):
        super(Toy, self).__init__()
        self.l1=layers.Conv2D(32,3,padding='same')
        self.l2=layers.Conv2D(nb_classes,3,padding='same')
        
    def call(self,x):
        x=self.l1(x)
        x=self.l2(x)
        return x

def test_loss(loss_func):
    def _loss(target,output):
        return loss_func(target,output)
    return _loss


def target_list_loss(loss_func):
    def _loss(target,output):
        weight=target[1]
        target=target[0]
        return weight*loss_func(target,output)
    return _loss


def three_arg_loss(loss_func):
    def _loss(target,output,weight):
        return weight*loss_func(target,output)
    return _loss


loss_func=tf.keras.losses.CategoricalCrossentropy()

loss_test=test_loss(loss_func)
loss_targ_list=target_list_loss(loss_func)
loss_3arg=three_arg_loss(loss_func)

def test_train(loss,gen):
    try: 
        model=Toy(NB_CLASSES)    
        model.compile(optimizer='adam',
                  loss=loss,
                  metrics=['accuracy'])
        model.fit(gen(6,NB_CLASSES),steps_per_epoch=STEPS,epochs=EPOCHS)
    except Exception as e:
        print(e)

#
# RUN TESTS
#
test_train(loss_test,gen)
test_train(loss_targ_list,gen_targ_list)
test_train(loss_3arg,gen_3args)

An example subclassing Loss (gives the same result):

class TargListLoss(tf.keras.losses.Loss):
    
    def __init__(self,loss_func):
        super(TargListLoss,self).__init__()
        self.loss_func=loss_func
        
    def call(self,target,output):
        weight=target[1]
        target=target[0]
        return weight*self.loss_func(target,output)
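Wired in the same way as the wrapper version (a sketch reusing Toy, gen_targ_list and loss_func from the example above), it fails identically:

# Same setup as test_train above, but compiled with the subclassed loss.
model = Toy(NB_CLASSES)
model.compile(optimizer='adam',
              loss=TargListLoss(loss_func),
              metrics=['accuracy'])
model.fit(gen_targ_list(6,NB_CLASSES), steps_per_epoch=STEPS, epochs=EPOCHS)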

Sample weights!

I was trying to build a custom loss function that weights the loss on a per-sample basis, but that is exactly what sample_weight is for.
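As a minimal sketch (x, y, w and ds here are placeholders, not objects from the example below), the two standard ways to hand Keras per-sample weights are:

# 1. Array inputs: pass sample_weight directly to fit().
model.fit(x, y, sample_weight=w, epochs=EPOCHS)

# 2. Generator / tf.data inputs: yield (inputs, targets, sample_weights)
#    triples; Keras treats the third element as the per-sample weight.
model.fit(ds, steps_per_epoch=STEPS, epochs=EPOCHS)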

Apologies to everyone for the silly question, but hopefully this saves someone else from repeating my mistake. I think I missed this because I originally planned to determine the weights by passing the metadata directly to the loss function. In retrospect, putting the metadata-to-weight logic inside the loss function makes no sense, since it is application-dependent.
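For example, the metadata-to-weight logic can live in the data pipeline instead (a hypothetical sketch: raw_ds and geo_to_weight stand in for an application-specific tf.data.Dataset and weighting rule; they are not part of the example below):

# Hypothetical sketch: derive the weight from metadata in the pipeline,
# then hand Keras a standard (x, y, sample_weight) triple.
def add_sample_weight(image, label, geolocation):
    weight = geo_to_weight(geolocation)   # application-specific rule (assumed)
    return image, label, weight

weighted_ds = raw_ds.map(add_sample_weight)
model.fit(weighted_ds, epochs=EPOCHS)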

For completeness, the code below shows how passing a third element from the generator does indeed weight each sample:

import tensorflow as tf
from tensorflow.keras import layers

BATCH_SIZE=2
SIZE=3
STEPS=8
EPOCHS=3
NB_CLASSES=4


def gen_inpt(ch_in):
    return tf.random.uniform((BATCH_SIZE,SIZE,SIZE,ch_in))

def gen_targ(nb_classes):
    t=tf.random.uniform((BATCH_SIZE,SIZE,SIZE),maxval=nb_classes,dtype=tf.int32)
    return tf.keras.utils.to_categorical(t,num_classes=nb_classes)
        
def gen_3args(ch_in,ch_out,dummy_sw):
    if dummy_sw:
        return ( ( gen_inpt(ch_in), gen_targ(ch_out), tf.convert_to_tensor(dummy_sw) ) for b in range(BATCH_SIZE*STEPS*EPOCHS) )
    else:
        return ( ( gen_inpt(ch_in), gen_targ(ch_out) ) for b in range(BATCH_SIZE*STEPS*EPOCHS) )

    
class Toy(tf.keras.Model):
    
    def __init__(self,nb_classes):
        super(Toy, self).__init__()
        self.l1=layers.Conv2D(32,3,padding='same')
        self.l2=layers.Conv2D(nb_classes,3,padding='same')
        
    def call(self,x):
        x=self.l1(x)
        x=self.l2(x)
        return x
    
loss_func=tf.keras.losses.CategoricalCrossentropy()

def test_train(loss,gen):
    try: 
        model=Toy(NB_CLASSES)    
        model.compile(optimizer='adam',
                  loss=loss,  # use the loss argument passed to test_train
                  metrics=['accuracy'])
        model.fit(gen,steps_per_epoch=STEPS,epochs=EPOCHS)
    except Exception as e:
        print(e)

#
# RUN TESTS
#
print('None: unweighted')
test_train(loss_func,gen_3args(6,NB_CLASSES,None))
print('ones: same as None')
test_train(loss_func,gen_3args(6,NB_CLASSES,[1,1]))
print('100s: should be roughly 100 times the loss of None')
test_train(loss_func,gen_3args(6,NB_CLASSES,[100,100]))
print('[0,100]: should be roughly 1/2 the 100s loss')
test_train(loss_func,gen_3args(6,NB_CLASSES,[0,100]))