How to find similar words in a Keras word embedding layer
From Stanford's CS224N course I know that Gensim provides a very convenient way to work with embeddings: most_similar.
I have tried to find an equivalent for a Keras Embedding layer, but without success. Is this not possible out of the box with Keras, or is there a wrapper for it somewhere?
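For reference, this is roughly what the Gensim call looks like (an illustrative sketch; the vectors file and the query words are placeholders, not from the question):

from gensim.models import KeyedVectors

# Load pre-trained vectors (placeholder path) and ask for the 10 nearest words
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=10))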
A naive implementation would be:
import tensorflow as tf

def most_similar(emb_layer, pos_word_idxs, neg_word_idxs=[], top_n=10):
    # The embedding matrix, shape (vocab_size, embedding_dim)
    weights = emb_layer.weights[0]

    # Build the query vector: average of the positive vectors and the negated negative vectors
    mean = []
    for idx in pos_word_idxs:
        mean.append(weights.value()[idx, :])
    for idx in neg_word_idxs:
        mean.append(weights.value()[idx, :] * -1)
    mean = tf.reduce_mean(mean, 0)

    # Dot product of every embedding with the query vector, keep the top_n
    dists = tf.tensordot(weights, mean, 1)
    best = tf.math.top_k(dists, top_n)

    # Mask words used as pos or neg
    mask = []
    for v in set(pos_word_idxs + neg_word_idxs):
        mask.append(tf.cast(tf.equal(best.indices, v), tf.int8))
    mask = tf.less(tf.reduce_sum(mask, 0), 1)

    return tf.boolean_mask(best.indices, mask), tf.boolean_mask(best.values, mask)
Of course you need to know the indices of the words. Assuming you have a word2idx mapping, you can get them with [word2idx[w] for w in pos_words].
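If you want to call it with words directly, a hypothetical convenience wrapper could look like this (word2idx and the argument names are assumptions for illustration, not part of the answer above):

def most_similar_words(emb_layer, word2idx, pos_words, neg_words=(), top_n=10):
    # Translate words to vocabulary indices, then delegate to most_similar above
    pos_idxs = [word2idx[w] for w in pos_words]
    neg_idxs = [word2idx[w] for w in neg_words]
    return most_similar(emb_layer, pos_idxs, neg_idxs, top_n)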
Usage:
# Assuming the first layer is the Embedding and you are interested in word with idx 10
idxs, vals = most_similar(model.layers[0], [10])

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)

    idxs = sess.run(idxs)
    vals = sess.run(vals)
Some potential improvements to that function:
- Make sure it always returns top_n words (after masking it can return fewer); see the sketch below.
- gensim works with normalized embeddings (L2 norm); see the sketch below.
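A rough sketch of those two points, under the same TF 1.x graph/session assumptions as above (one possible variant, not the original answer's code): L2-normalize the embedding matrix so the dot product becomes a cosine similarity, and request extra candidates from top_k so that top_n results survive the masking.

def most_similar_cosine(emb_layer, pos_word_idxs, neg_word_idxs=[], top_n=10):
    weights = emb_layer.weights[0]
    # Normalize every embedding to unit length, so the dot product below is a cosine similarity
    norm_weights = tf.nn.l2_normalize(weights, axis=1)

    # Query vector: mean of positive and negated negative (normalized) vectors, re-normalized
    mean = []
    for idx in pos_word_idxs:
        mean.append(norm_weights[idx, :])
    for idx in neg_word_idxs:
        mean.append(norm_weights[idx, :] * -1)
    mean = tf.nn.l2_normalize(tf.reduce_mean(mean, 0), axis=0)

    # Cosine similarity of every word with the query vector;
    # ask for extra candidates so top_n results remain after masking
    sims = tf.tensordot(norm_weights, mean, 1)
    best = tf.math.top_k(sims, top_n + len(pos_word_idxs) + len(neg_word_idxs))

    # Mask out the query words themselves, then cut back to top_n
    mask = []
    for v in set(pos_word_idxs + neg_word_idxs):
        mask.append(tf.cast(tf.equal(best.indices, v), tf.int8))
    mask = tf.less(tf.reduce_sum(mask, 0), 1)

    idxs = tf.boolean_mask(best.indices, mask)[:top_n]
    vals = tf.boolean_mask(best.values, mask)[:top_n]
    return idxs, vals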