How to find cosine similarity of one vector vs matrix
I have a TF-IDF matrix of shape (149, 1001). What I want is to compute the cosine similarity of the last column against all the other columns.
This is what I did:
from numpy import dot
from numpy.linalg import norm
for i in range(mat.shape[1]-1):
    cos_sim = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
    cos_sim
But this loop is slowing it down. Is there an efficient way to do this? I want to use numpy only.
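As an aside, the loop as written overwrites cos_sim on every iteration, so only the last value survives. A sketch that keeps all the similarities (assuming mat is the dense (149, 1001) array from above) would preallocate an output array:

import numpy as np
from numpy import dot
from numpy.linalg import norm

# out[i] holds the similarity of column i against the last column
out = np.empty(mat.shape[1] - 1)
for i in range(mat.shape[1] - 1):
    out[i] = dot(mat[:, i], mat[:, -1]) / (norm(mat[:, i]) * norm(mat[:, -1]))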
There is a sklearn function that computes the cosine similarity between vectors, cosine_similarity. Here's a use case with a sample array:
import numpy as np

a = np.random.randint(0,10,(5,5))
print(a)
array([[5, 2, 0, 4, 1],
       [4, 2, 8, 2, 4],
       [9, 7, 4, 9, 7],
       [4, 6, 0, 1, 3],
       [1, 1, 2, 5, 0]])
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(a[None,:,-1] , a.T[:-1])
# array([[0.94022805, 0.91705665, 0.75592895, 0.79921221, 1. ]])
where a[None,:,-1] is the last column in a, reshaped so that both matrices have the same number of columns (shape[1]), which is a requirement of the function:
a[None,:,-1]
# array([[1, 4, 7, 3, 0]])
And by transposing a and dropping its last row (a.T[:-1]), the result is the cosine_similarity of the last column with all the other columns.
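Applied to the shape in the question, a minimal sketch (with a random stand-in for the TF-IDF matrix, since mat itself isn't shown):

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

mat = np.random.rand(149, 1001)    # stand-in for the (149, 1001) TF-IDF matrix
# last column as a (1, 149) row vs. the other 1000 columns as (1000, 149) rows
sims = cosine_similarity(mat[None, :, -1], mat.T[:-1])
sims.shape
# (1, 1000)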
Checking against the solution from the question:
from numpy import dot
from numpy.linalg import norm
cos_sim = []
for i in range(a.shape[1]-1):
    cos_sim.append(dot(a[:,i], a[:,-1])/(norm(a[:,i])*norm(a[:,-1])))
np.allclose(cos_sim, cosine_similarity(a[None,:,-1] , a.T[:-1]))
# True
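One practical aside: real TF-IDF matrices are usually scipy.sparse, and cosine_similarity accepts sparse input directly, whereas the numpy dot/norm route needs a dense array. A sketch, with a random sparse stand-in:

from scipy.sparse import random as sparse_random
from sklearn.metrics.pairwise import cosine_similarity

X = sparse_random(149, 1001, density=0.05, format='csr')  # stand-in sparse TF-IDF
# last column as a (1, 149) sparse row vs. the other columns as (1000, 149) rows
sims = cosine_similarity(X[:, -1].T, X[:, :-1].T)          # shape (1, 1000)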
Leverage 2D vectorized matrix-multiplication
Here's one with NumPy using matrix-multiplication on 2D data -
p1 = mat[:,-1].dot(mat[:,:-1])
p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
out1 = p1/p2
Explanation: p1 is the vectorized equivalent of the loop's dot(mat[:,i], mat[:,-1]), and p2 of (norm(mat[:,i])*norm(mat[:,-1])).
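For reuse, those three lines fold into a small helper (cosine_vs_last is an illustrative name, not from the original answer); a sketch assuming a dense 2D array:

import numpy as np
from numpy.linalg import norm

def cosine_vs_last(mat):
    # cosine similarity of every column of mat except the last vs. the last column
    p1 = mat[:, -1].dot(mat[:, :-1])                    # all dot products in one matmul
    p2 = norm(mat[:, :-1], axis=0) * norm(mat[:, -1])   # products of the column norms
    return p1 / p2

out = cosine_vs_last(np.random.rand(149, 1001))   # shape (1000,)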
Sample run for verification -
In [57]: np.random.seed(0)
...: mat = np.random.rand(149,1001)
In [58]: out = np.empty(mat.shape[1]-1)
...: for i in range(mat.shape[1]-1):
...:     out[i] = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
In [59]: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
...: out1 = p1/p2
In [60]: np.allclose(out, out1)
Out[60]: True
Timings -
In [61]: %%timeit
...: out = np.empty(mat.shape[1]-1)
...: for i in range(mat.shape[1]-1):
...:     out[i] = dot(mat[:,i], mat[:,-1])/(norm(mat[:,i])*norm(mat[:,-1]))
18.5 ms ± 977 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [62]: %%timeit
...: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = norm(mat[:,:-1],axis=0)*norm(mat[:,-1])
...: out1 = p1/p2
939 µs ± 29.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# @yatu's soln
In [89]: a = mat
In [90]: %timeit cosine_similarity(a[None,:,-1] , a.T[:-1])
2.47 ms ± 461 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Further optimization on norm with einsum
Alternatively, we could compute p2 with np.einsum.
So, norm(mat[:,:-1],axis=0) could be replaced by:
np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))
hence giving us a modified p2:
p2 = np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))*norm(mat[:,-1])
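As a quick sanity check (a sketch with the same shape as the verification run), the einsum route computes the same column norms:

import numpy as np
from numpy.linalg import norm

mat = np.random.rand(149, 1001)   # same shape as the verification run
# 'ij,ij->j' sums the elementwise squares down each column (the squared
# column norms), so its square root matches norm(..., axis=0)
np.allclose(np.sqrt(np.einsum('ij,ij->j', mat[:,:-1], mat[:,:-1])),
            norm(mat[:,:-1], axis=0))
# True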
Timings with the same setup as earlier -
In [82]: %%timeit
...: p1 = mat[:,-1].dot(mat[:,:-1])
...: p2 = np.sqrt(np.einsum('ij,ij->j',mat[:,:-1],mat[:,:-1]))*norm(mat[:,-1])
...: out1 = p1/p2
607 µs ± 132 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
30x+ speedup over the loopy one!