Calculate cosine similarity of all possible text pairs retrieved from 4 MySQL tables

I have 4 tables with the schema (application, text_id, title, text). Now I want to compute the cosine similarity between all possible pairs of texts (title and text concatenated) and ultimately store the results in a CSV file with the fields (app1, app2, text_id1, text1, text_id2, text2, cosine_similarity).

Since there are a lot of possible combinations, this should run very efficiently. What is the most common approach here? I would appreciate any pointers.

Edit: Although the references provided may touch on my problem, I still don't know how to approach it. Could someone give more details on a strategy for accomplishing this task? Besides the cosine similarities themselves, I also need the corresponding text pairs as output.

Below is a minimal example of computing the pairwise cosine similarities of a set of documents (assuming you have already successfully retrieved the titles and texts from the database).
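In case the retrieval step itself is unclear, here is a rough sketch of pulling the four tables into one list of documents. This is only an assumption-laden illustration: it uses pandas and SQLAlchemy (with a MySQL driver such as PyMySQL), and the connection string, table names and column names are placeholders you would adapt to your actual schema.

# Sketch of the retrieval step (placeholder names, adapt to your schema)
import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials and database name
engine = create_engine('mysql+pymysql://user:password@localhost/mydb')

# Pull (app, text_id, title, text) from all four tables into one DataFrame;
# 'table1' .. 'table4' are placeholder table names
query = ' UNION ALL '.join(
    f'SELECT app, text_id, title, text FROM {t}'
    for t in ['table1', 'table2', 'table3', 'table4']
)
df = pd.read_sql(query, engine)

# Concatenate title and text into the documents to be compared
df['doc'] = df['title'].fillna('') + ' ' + df['text'].fillna('')
data = df['doc'].tolist()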

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assume this is the data we have (4 short documents)
data = [
    'I like beer and pizza',
    'I love pizza and pasta',
    'I prefer wine over beer',
    'Thou shalt not pass'
]

# Vectorise the data; each row of `X` is the TF-IDF representation of the
# corresponding entry in `data` (first row <-> first sentence, etc.)
vec = TfidfVectorizer()
X = vec.fit_transform(data)

# Calculate the pairwise cosine similarities (depending on the amount of data you have, this could take a while)
S = cosine_similarity(X)

'''
S looks as follows:
array([[ 1.        ,  0.4078538 ,  0.19297924,  0.        ],
       [ 0.4078538 ,  1.        ,  0.        ,  0.        ],
       [ 0.19297924,  0.        ,  1.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

The first row of `S` contains the cosine similarities of the first document to every other document in `X`.
For example, the cosine similarity of the first sentence to the third sentence is ~0.193.
Obviously the similarity of every sentence/document to itself is 1 (hence the diagonal of the similarity matrix is all ones).
Since all indices are consistent, it is straightforward to map the similarities back to the corresponding sentences (see the sketch below).
'''
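Since you also need the text pairs (and their metadata) as output, one way to dump the upper triangle of `S` into a CSV with the fields from the question is sketched below. The `apps` and `text_ids` lists are placeholders here; in practice they would come from the same query/DataFrame as the texts, aligned index-by-index with `data`.

# Sketch: write (app1, app2, text_id1, text1, text_id2, text2, cosine_similarity)
# for each unordered pair of documents to a CSV file
import csv
from itertools import combinations

apps = ['app_a', 'app_a', 'app_b', 'app_b']  # placeholder metadata aligned with `data`
text_ids = [1, 2, 3, 4]                      # placeholder metadata aligned with `data`

with open('similarities.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['app1', 'app2', 'text_id1', 'text1',
                     'text_id2', 'text2', 'cosine_similarity'])
    # Iterate over the upper triangle only, so each pair is written exactly once
    for i, j in combinations(range(len(data)), 2):
        writer.writerow([apps[i], apps[j], text_ids[i], data[i],
                         text_ids[j], data[j], S[i, j]])

Iterating over `combinations(range(len(data)), 2)` avoids duplicating pairs and self-pairs; if the number of documents is large, you may want to stream the rows instead of materialising the full similarity matrix at once.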