Optimizing CTR (click-through rate) with VowpalWabbit: How to choose suitable parameters?

I am trying to use VowpalWabbit (following this vw tutorial) to optimize the click-through rate of certain articles or ads (the actions) given a device type (the context). However, I cannot get it to reliably converge to the optimal actions.

I created a minimal working example (sorry for the length):

import random
import numpy as np
from matplotlib import pyplot as plt
from vowpalwabbit import pyvw

plt.ion()

action_space = ["article-1", "article-2", "article-3"]

def running_mean(x, N):
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / float(N)

def to_vw_example_format(context, cb_label=None):
    if cb_label is not None:
        chosen_action, cost, prob = cb_label
    example_string = ""
    example_string += "shared |User device={} \n".format(context)
    for action in action_space:
        if cb_label is not None and action == chosen_action:
            example_string += "1:{}:{} ".format(cost, prob)
        example_string += "|Action ad={} \n".format(action)
    # Strip the last newline
    return example_string[:-1]

# definition of problem to solve, playing out the article with highest ctr given a context
context_to_action_ctr = {
    "device-1": {"article-1": 0.05, "article-2": 0.06, "article-3": 0.04},
    "device-2": {"article-1": 0.08, "article-2": 0.07, "article-3": 0.05},
    "device-3": {"article-1": 0.01, "article-2": 0.04, "article-3": 0.09},
    "device-4": {"article-1": 0.04, "article-2": 0.04, "article-3": 0.045},
    "device-5": {"article-1": 0.09, "article-2": 0.01, "article-3": 0.07},
    "device-6": {"article-1": 0.03, "article-2": 0.09, "article-3": 0.04}
}

#vw = f"--cb_explore 3 -q UA -q UU --epsilon 0.1"
vw = f"--cb_explore_adf -q UA -q UU --bag 5 "
#vw = f"--cb_explore_adf -q UA --epsilon 0.2"

actor = pyvw.vw(vw)

random_rewards = []
actor_rewards = []
optimal_rewards = []

for step in range(200000):

    # pick a random context
    device = random.choice(list(context_to_action_ctr.keys()))

    # let vw generate probability distribution
    # action_probabilities = np.array(actor.predict(f"|x device:{device}"))
    action_probabilities = np.array(actor.predict(to_vw_example_format(device)))

    # sample action
    probabilities = action_probabilities / action_probabilities.sum()
    action_idx = np.random.choice(len(probabilities), 1, p=probabilities)[0]
    probability = action_probabilities[action_idx]

    # get reward/regret
    action_to_reward_regret = {
        action: (1, 0) if random.random() < context_to_action_ctr[device][action] else (0, 1) for action in action_space
    }

    actor_action = action_space[action_idx]
    random_action = random.choice(action_space)
    optimal_action = {
        "device-1": "article-2",
        "device-2": "article-1",
        "device-3": "article-3",
        "device-4": "article-3",
        "device-5": "article-1",
        "device-6": "article-2",
    }[device]

    # update statistics
    actor_rewards.append(action_to_reward_regret[actor_action][0])
    random_rewards.append(action_to_reward_regret[random_action][0])
    optimal_rewards.append(action_to_reward_regret[optimal_action][0])

    # learn online
    reward, regret = action_to_reward_regret[actor_action]
    cost = -1 if reward == 1 else 0
    # actor.learn(f"{action_idx+1}:{cost}:{probability} |x device:{device}")
    actor.learn(to_vw_example_format(device, (actor_action, cost, probability)))


    if step % 100 == 0 and step > 1000:
        plt.clf()
        axes = plt.gca()
        plt.title("Reward over time")
        plt.plot(running_mean(actor_rewards, 10000), label=str(vw))
        plt.plot(running_mean(random_rewards, 10000), label="Random actions")
        plt.plot(running_mean(optimal_rewards, 10000), label="Optimal actions")
        plt.legend()
        plt.pause(0.0001)

Basically, there are 3 possible actions (articles 1-3) and 6 contexts (devices 1-6). Each combination has a specific CTR (click-through rate), and for each context there is one optimal action (the article with the highest click rate for that device). In each iteration a random context is sampled and the reward/regret of every action is rolled out. The cost passed to VowpalWabbit for learning is -1 if the reward is 1 (the user clicked) and 0 if the reward is 0 (no click). Over time the algorithm should find the best article for each device.
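For reference, this is the ADF example string that to_vw_example_format builds for a learning step. The concrete values here (device-2, article-1 shown with probability 0.34 and clicked, hence cost -1) are just illustrative:

# Illustrative call of to_vw_example_format from the example above; the probability
# 0.34 is a made-up value for demonstration (trailing spaces not shown in the output).
print(to_vw_example_format("device-2", ("article-1", -1, 0.34)))
# shared |User device=device-2
# 1:-1:0.34 |Action ad=article-1
# |Action ad=article-2
# |Action ad=article-3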

Some results (average reward over time):

Questions:

Since the click-through rates are fairly small, a large number of plays is needed before it converges, so I understand that the problem is hard. Still, I would expect the algorithm to find the optimum over time.
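To put numbers on that difficulty, here is a quick baseline calculation (my addition; it reuses the context_to_action_ctr dictionary from the example above, and contexts are drawn uniformly): a uniformly random policy earns roughly 5.25% reward per step, while always playing the optimal article earns roughly 7.58%, so the learner only has about 2.3 percentage points of reward to gain.

# Quick sanity check of the two baselines, reusing context_to_action_ctr from the
# example above. Contexts are sampled uniformly, so plain averages are sufficient.
all_ctrs = [ctr for ctrs in context_to_action_ctr.values() for ctr in ctrs.values()]
random_reward_rate = sum(all_ctrs) / len(all_ctrs)                 # ~0.0525
optimal_reward_rate = sum(max(ctrs.values())
                          for ctrs in context_to_action_ctr.values()) / len(context_to_action_ctr)  # ~0.0758
print(random_reward_rate, optimal_reward_rate)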

Am I missing something in the VowpalWabbit configuration?

Answer:

If all of the policies agree, then "--bag 5" without an epsilon parameter can start producing zero probabilities.

I modified your code slightly to track the probabilities coming out of actor.predict.

First plot: a moving average of the predicted probability of the "optimal action" for each context. You can see that for a few of them it actually stays around 0. The tail of the table of decisions shows that all of those distributions are in fact something like [1, 0, 0], so there is no chance of recovering from that point, because exploration has essentially been switched off.
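A tracking change along these lines (my sketch, not necessarily the exact code used) only needs a few extra lines inside the question's training loop:

# Sketch only: extra bookkeeping inside the existing loop, added right after
# `optimal_action` is looked up, recording how much probability the current
# pmf assigns to the known-best article for the sampled device.
optimal_action_probs = []   # create once, before the loop

# ... inside the loop, after `probabilities` and `optimal_action` are computed:
optimal_action_probs.append(probabilities[action_space.index(optimal_action)])

# and plot it the same way as the rewards, e.g.:
# plt.plot(running_mean(optimal_action_probs, 10000), label="P(optimal action)")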

Adding a small amount of exploration (--bag 5 --epsilon 0.02) helps it eventually converge to the optimum and gives a plot like this:

Learning does not appear to be fast, but the contexts it struggles with are actually the most ambiguous ones, so they do not contribute much regret.
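Concretely, the suggested configuration applied to the question's setup (keeping the original -q interactions) would look like this:

# Bagging for exploration plus a small epsilon floor, so no action's probability
# can collapse to exactly zero.
vw = "--cb_explore_adf -q UA -q UU --bag 5 --epsilon 0.02"
actor = pyvw.vw(vw)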