(Efficiently) expanding a feature mask tensor to match embedding dimensions

I have a B (batch size) × F (number of features) mask tensor M that I want to apply (element-wise multiply) to an input x.

...the catch is that my x has already had its original feature columns converted into embeddings of non-constant width, so its overall dimensions are B × E (total embedding size).

My draft code looks like this:

import torch

# Given something like:
M = torch.Tensor([[0.2, 0.8], [0.5, 0.5], [0.6, 0.4]])  # B=3, F=2
x = torch.Tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 0], [11, 12, 13, 14, 15]])  # B=3, E=5

feature_sizes = [2, 3]  # (Feature 0 embedded to 2 cols, feature 1 to 3)

# In forward() pass:
components = []
for ix, size in enumerate(feature_sizes):
    components.append(M[:, ix].view(-1, 1).expand(-1, size))
M_x = torch.cat(components, dim=1)

# Now M_x is (B, E) and can be multiplied element-wise with x

> M_x * x = torch.Tensor([
>     [0.2, 0.4, 2.4, 3.2, 4],
>     [3, 3.5, 4, 4.5, 0],
>     [6.6, 7.2, 5.2, 5.6, 6],
> ])

My question is: is there any obvious optimization I'm missing here? Is the for loop the right approach, or is there a more direct way to achieve this?

I control the embedding process, so I can store any representation that would help, e.g. something that doesn't depend on the feature_sizes list of ints.
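
For example, one representation that would avoid the Python loop at forward time (just a sketch, not what the draft above uses; block is a made-up name) is a precomputed (F, E) 0/1 matrix that maps each feature to its embedding columns, so the expansion becomes a single matmul:

import torch

# Hypothetical alternative: precompute a (F, E) block matrix of ones, where
# row f has ones in the columns covered by feature f. Then M @ block expands
# M from (B, F) to (B, E) in one matmul.
feature_sizes = [2, 3]
F, E = len(feature_sizes), sum(feature_sizes)
block = torch.zeros(F, E)
col = 0
for f, size in enumerate(feature_sizes):
    block[f, col:col + size] = 1.0
    col += size

M = torch.Tensor([[0.2, 0.8], [0.5, 0.5], [0.6, 0.4]])
M_x = M @ block  # (B, E), same values as the loop-and-cat version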

Ugh, I forgot: indexing can do this!

Given the setup above (but with a more complicated feature_sizes, to show the point more clearly), we can pre-compute an index tensor, e.g.:

# Given that:
feature_sizes = [1, 3, 1, 2]

# Produce nested list e.g. [[0], [1, 1, 1], [2], [3, 3]]:
ixs_per_feature = [[ix] * size for ix, size in enumerate(feature_sizes)]

# Flatten out into a vector e.g. [0, 1, 1, 1, 2, 3, 3]:
mask_ixs = torch.LongTensor(
    [item for sublist in ixs_per_feature for item in sublist]
)

# Now can directly produce M_x by indexing M:
M_x = M[:, mask_ixs]
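
As a sanity check (a quick standalone snippet, going back to the original feature_sizes = [2, 3] example), the indexed result matches the loop-and-cat version:

import torch

# Verify the index-based expansion against the original loop-and-cat draft.
M = torch.Tensor([[0.2, 0.8], [0.5, 0.5], [0.6, 0.4]])
feature_sizes = [2, 3]

mask_ixs = torch.LongTensor(
    [ix for ix, size in enumerate(feature_sizes) for _ in range(size)]
)
indexed = M[:, mask_ixs]

looped = torch.cat(
    [M[:, ix].view(-1, 1).expand(-1, size) for ix, size in enumerate(feature_sizes)],
    dim=1,
)
assert torch.equal(indexed, looped)  # both are (3, 5)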

Using this instead of the for loop gave me a modest speedup.
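
As a side note (not benchmarked here against the indexing approach), torch.repeat_interleave can do the same expansion directly from the per-feature sizes, without pre-computing an index tensor:

import torch

# Alternative sketch: repeat each mask column by its feature size along dim=1.
M = torch.Tensor([[0.2, 0.8], [0.5, 0.5], [0.6, 0.4]])
sizes = torch.tensor([2, 3])  # feature_sizes as a tensor

M_x = torch.repeat_interleave(M, sizes, dim=1)  # (B, E) = (3, 5)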