How to get average pairwise cosine similarity per group in Pandas

I have a sample data frame as below:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.array(
    [['facebook', "women tennis"], ['facebook', "men basketball"], ['facebook', 'club'],
     ['apple', "vice president"], ['apple', 'swimming contest']]), columns=['firm', 'text'])

Now I want to compute the text similarity within each firm using word embeddings. For example, the mean cosine similarity for facebook would be the average of the pairwise cosine similarities among rows 0, 1, and 2. The final data frame should have a column ['mean_cos_between_items'] next to each row for every firm. The value will be the same for all rows of a firm, since it is a within-firm pairwise comparison.

I wrote the code below:

import gensim
from gensim import utils
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn.metrics.pairwise import cosine_similarity
from itertools import combinations

# map each word to vector space
def represent(sentence):
    vectors = []
    for word in sentence:
        try:
            vector = model.wv[word]
            vectors.append(vector)
        except KeyError:
            pass
    return np.array(vectors).mean(axis=0)

# get average if more than 1 word is included in the "text" column
def document_vector(items):
    # remove out-of-vocabulary words
    doc = [word for word in items if word in model_glove.vocab]
    if doc:
        doc_vector = model_glove[doc]
        mean_vec = np.mean(doc_vector, axis=0)
    else:
        mean_vec = None
    return mean_vec

# get average pairwise cosine similarity score
def mean_cos_sim(grp):
    output = []
    for i, j in combinations(grp.index.tolist(), 2):
        doc_vec = document_vector(grp.iloc[i]['text'])
        if doc_vec is not None and len(doc_vec) > 0:
            sim = cosine_similarity(document_vector(grp.iloc[i]['text']).reshape(1, -1),
                                    document_vector(grp.iloc[j]['text']).reshape(1, -1))
            output.append([i, j, sim])
    return np.mean(np.array(output), axis=0)

# save the result to a new column
df['mean_cos_between_items'] = df.groupby(['firm']).apply(mean_cos_sim)

However, I get an error. Can you help? Thanks!

Remove the .vocab in model_glove.vocab; it is no longer supported in current versions of gensim. Edit: you also need split() here, so that you iterate over words and not over characters.

# get average if more than 1 word is included in the "text" column
def document_vector(items):
    # remove out-of-vocabulary words
    doc = [word for word in items.split() if word in model_glove]
    if doc:
        doc_vector = model_glove[doc]
        mean_vec = np.mean(doc_vector, axis=0)
    else:
        mean_vec = None
    return mean_vec
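
To see why the split() matters: iterating directly over a string yields single characters, so without it the comprehension looks up character vectors instead of word vectors. A quick illustration (not part of the answer's code):

[w for w in "women tennis"]          # ['w', 'o', 'm', 'e', 'n', ' ', 't', ...] - characters
[w for w in "women tennis".split()]  # ['women', 'tennis'] - words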

Here you iterate over tuples of indices when you want to iterate over the values, so remove the .index. You also put all the values into output, including the indices i and j, so if you wanted to take their mean you would have to specify which mean you want. Since you don't seem to need i and j anyway, you can just put the result sim into a list and then take the mean of that list:

# get average pairwise cosine similarity score
def mean_cos_sim(grp):
    output = []
    for i, j in combinations(grp.tolist(), 2):
        vec_i, vec_j = document_vector(i), document_vector(j)
        if vec_i is not None and vec_j is not None:
            sim = cosine_similarity(vec_i.reshape(1, -1), vec_j.reshape(1, -1))
            output.append(sim)
    return np.mean(output, axis=0)
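
As a quick sanity check (assuming model_glove has been loaded), you can apply this to a single firm's texts; with the glove-twitter-25 vectors it should reproduce the facebook value shown in the grouped output below:

print(mean_cos_sim(df.loc[df.firm == 'facebook', 'text']))
# [[0.83989316]]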

Here you try to add the result as a column, but the number of rows will differ: the resulting DataFrame has one row per firm, while the original DataFrame has one row per text. So you have to create a new DataFrame instead (which you can then optionally merge/join with the original DataFrame based on the firm column, as sketched below after the grouped output):

df = pd.DataFrame(np.array(
    [['facebook', "women tennis"], ['facebook', "men basketball"], ['facebook', 'club'],
     ['apple', "vice president"], ['apple', 'swimming contest']]), columns=['firm', 'text'])
df_grpd = df.groupby(['firm'])["text"].apply(mean_cos_sim)

which overall will give you (edit: updated):

print(df_grpd)
firm
apple       [[0.53190523]]
facebook    [[0.83989316]]
Name: text, dtype: object
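
If you still want the per-row ['mean_cos_between_items'] column asked for in the question, a minimal sketch (my addition, assuming df and df_grpd as defined above) is to flatten each firm's result to a scalar and map it back onto the original rows via the firm column:

# each value in df_grpd is a (1, 1) array; squeeze it to a scalar, then map by firm
df['mean_cos_between_items'] = df['firm'].map(df_grpd.apply(lambda a: float(np.squeeze(a))))

Every row of a firm then carries the same value, as described in the question.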

Edit:

I just noticed that the reason for the excessively high scores was the missing tokenization, see the changed parts above. Without split() this just compares character similarities, which tend to be very high.

Note that sklearn.metrics.pairwise.cosine_similarity, when passed a single matrix X, automatically returns the pairwise similarities between all samples in X. I.e., there is no need to construct the pairs manually.
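
A tiny standalone illustration (made-up vectors, only to show the behavior):

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(cosine_similarity(X))
# a 3x3 matrix: entry (i, j) is the cosine similarity between rows i and j of X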

Assuming you build your mean embeddings with something like the following (I am using glove-twitter-25 here):
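
Here model_glove is taken to be a pretrained gensim KeyedVectors model; one way to obtain it (a setup step the answer does not show) is gensim's downloader API:

import gensim.downloader as api

# fetch the pretrained 25-dimensional GloVe Twitter vectors (cached after the first download)
model_glove = api.load("glove-twitter-25")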

def mean_embeddings(s):
    """Transfer a list of words into mean embedding"""
    return np.mean([model_glove.get_vector(x) for x in s], axis=0)

df["embeddings"] = df.text.str.split().apply(mean_embeddings)

so that df.embeddings looks like:

>>> df.embeddings
0    [-0.2597, -0.153495, -0.5106895, -1.070115, 0....
1    [0.0600965, 0.39806002, -0.45810497, -1.375365...
2    [-0.43819, 0.66232, 0.04611, -0.91103, 0.32231...
3    [0.1912625, 0.0066999793, -0.500785, -0.529915...
4    [-0.82556, 0.24555385, 0.38557374, -0.78941, 0...
Name: embeddings, dtype: object

you can get the average pairwise cosine similarity like this; the main point is that you can apply cosine_similarity directly to the adequately prepared matrix of each group:

(
 df.groupby("firm").embeddings # extract 'embeddings' for each group
 .apply(np.stack) # turns sequence of arrays into proper matrix
 .apply(cosine_similarity) # the magic: compute pairwise similarity matrix
 .apply(np.mean) # get the mean
)

For the model I used, the result is:

firm
apple       0.765953
facebook    0.893262
Name: embeddings, dtype: float32
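
One caveat worth noting (my observation, not part of the answer): np.mean over the full similarity matrix includes the diagonal of 1.0 self-similarities, which pulls the average up compared to the combinations-based approach above. To average only the distinct pairs, mask out the diagonal, e.g.:

def mean_offdiag(sim):
    # average of the pairwise similarities excluding the 1.0 self-similarities;
    # an n x n similarity matrix has n * (n - 1) off-diagonal entries
    # (assumes each group has at least two texts)
    n = sim.shape[0]
    return (sim.sum() - np.trace(sim)) / (n * (n - 1))

(
 df.groupby("firm").embeddings
 .apply(np.stack)
 .apply(cosine_similarity)
 .apply(mean_offdiag)  # mean over distinct pairs only
)

This reproduces the combinations-based averages from the first answer (about 0.5319 for apple and 0.8399 for facebook).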