PyTorch loss function referencing model parameters

For an assignment, I need to create a movie recommendation system that uses the provided loss function:

$$\sum_{i=1}^{M}\sum_{j=1}^{M} \mathbb{1}[i \neq j]\,\bigl(v_i^\top v_j - X_{i,j}\bigr)^2$$

This means that the dot product between two movie embeddings v_i and v_j should be very close to X_{i,j}, where X_{i,j} is the count of users who liked both movie i and movie j. The indicator function omits the entries where i == j (they are set to 0).
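As a small illustration of what X encodes (a sketch with made-up data): if S is a binary user-by-movie "like" matrix, then S.T @ S counts, for each pair of movies, the users who liked both:

import torch

# Hypothetical data: 3 users (rows), 2 movies (columns);
# a 1 means that user liked that movie.
S = torch.tensor([[1, 1],
                  [1, 0],
                  [1, 1]])

X = S.T @ S
print(X)
# tensor([[3, 2],
#         [2, 2]])
# X[0, 1] == 2: two users liked both movie 0 and movie 1;
# the diagonal X[i, i] just counts the likes of movie i.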

The deliverable for the assignment is the weight matrix from the hidden layer. Its dimensions should be 9724x300: 9724 unique movie IDs and 300 neurons. 300 is an arbitrary choice, influenced by the 300 neurons used in Google's word2vec.
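For the full-size deliverable, the learnable matrix would then be shaped as described above (a minimal sketch; the variable names are illustrative):

import torch

num_movies, embedding_dim = 9724, 300  # per the assignment
weights = torch.randn(num_movies, embedding_dim, requires_grad=True)
print(weights.shape)  # torch.Size([9724, 300])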

What I have:

Where I'm stuck:

Before you read any further, please note that seeking and accepting direct help with your assignment from Stack Overflow may be against your school's rules and carry consequences for you as a student!

That said, the way I would model this problem is as follows:

import torch

U = 300 # number of users
M = 30  # number of movies
D = 4   # dimension of embedding vectors

source = torch.randint(0, 2, (U, M)) # users' ratings
X = source.transpose(0, 1) @ source  # your `preprocessed_data`

# initial values for your embedding. This is what your algorithm needs to learn
v = torch.randn(M, D, requires_grad=True)
X = X.to(torch.float32) # necessary to be in line with `v`

# this is the `(v_i^T v_j - X_{i,j})**2` part
loss_elementwise = (v @ v.transpose(0, 1) - X).pow(2)

# now we need to get rid of the diagonal. Notice that we can equally
# well get rid of the diagonal and the whole upper triangular part,
# as well, since both V @ V.T and source.T @ source are symmetric, so
# the upper triangular part contains just
# a mirror reflection of the lower triangular part.
# This means that we actually implement a bit different summation:
# sum(i=1,M) sum(j=1,i-1) stuff(i, j)
# instead of
# sum(i=1,M) sum(j=1,M) indicator[i != j] stuff(i, j)
# and get exactly half the original value
masked = torch.tril(loss_elementwise, -1)

# finally we sum it up, multiplying by 2 to make up
# for the "lost" upper triangular part
loss = 2 * masked.sum()
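
As an optional sanity check (a sketch, reusing the tensors above): doubling the strictly-lower-triangular sum should match summing the full M x M matrix with only the diagonal removed:

# zero out just the diagonal of the full matrix and compare
off_diagonal = loss_elementwise * (1 - torch.eye(M))
assert torch.allclose(loss, off_diagonal.sum())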

Now what remains to be implemented is the optimization loop, which will use the gradient of loss to optimize the values of v.
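
A minimal sketch of such a loop, reusing v and X from above (the optimizer choice, learning rate, and step count are illustrative, not prescribed by the assignment):

optimizer = torch.optim.Adam([v], lr=1e-2)

for step in range(1000):
    optimizer.zero_grad()
    # recompute the loss from the current embeddings
    loss_elementwise = (v @ v.transpose(0, 1) - X).pow(2)
    loss = 2 * torch.tril(loss_elementwise, -1).sum()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, loss.item())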