Product merge layers with the Keras functional API for a Word2Vec model

I am trying to implement Word2Vec CBOW with negative sampling using Keras, following the code found here:

import numpy as np

from keras.layers import Input, Embedding, Lambda, merge
from keras.models import Model
from keras import backend as K

# SentencesIterator and VocabGenerator come from the code linked above.

EMBEDDING_DIM = 100

sentences = SentencesIterator('test_file.txt')
v_gen = VocabGenerator(sentences=sentences, min_count=5, window_size=3,
                       sample_threshold=-1, negative=5)

v_gen.scan_vocab()
v_gen.filter_vocabulary()
reverse_vocab = v_gen.generate_inverse_vocabulary_lookup('test_lookup')

# Generate embedding matrix with all values between -1/2d, 1/2d
embedding = np.random.uniform(-1.0 / (2 * EMBEDDING_DIM),
                              1.0 / (2 * EMBEDDING_DIM),
                              (v_gen.vocab_size + 3, EMBEDDING_DIM))

# Creating CBOW model
# Model has 3 inputs
# Current word index, context words indexes and negative sampled word indexes
word_index = Input(shape=(1,))
context = Input(shape=(2*v_gen.window_size,))
negative_samples = Input(shape=(v_gen.negative,))

# All inputs are processed through a common embedding layer
shared_embedding_layer = (Embedding(input_dim=(v_gen.vocab_size + 3),
                                    output_dim=EMBEDDING_DIM,
                                    weights=[embedding]))

word_embedding = shared_embedding_layer(word_index)
context_embeddings = shared_embedding_layer(context)
negative_words_embedding = shared_embedding_layer(negative_samples)

# Now the context words are averaged to get the CBOW vector
cbow = Lambda(lambda x: K.mean(x, axis=1),
              output_shape=(EMBEDDING_DIM,))(context_embeddings)

# Context is multiplied (dot product) with current word and negative
# sampled words
word_context_product = merge([word_embedding, cbow], mode='dot')
negative_context_product = merge([negative_words_embedding, cbow],
                                 mode='dot',
                                 concat_axis=-1)

# The dot products are outputted
model = Model(input=[word_index, context, negative_samples],
              output=[word_context_product, negative_context_product])

# Binary crossentropy is applied on the output
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
print(model.summary())

model.fit_generator(v_gen.pretraining_batch_generator(reverse_vocab),
                    samples_per_epoch=10,
                    nb_epoch=1)

However, I get an error in the merge part, because the embedding layer is a 3D tensor while cbow only has 2 dimensions. I assume I need to reshape the embedding (which is [?, 1, 100]) to [1, 100], but I can't find how to do the reshape with the functional API. I am using the TensorFlow backend.

Also, if someone can point me to another implementation of CBOW with Keras (Gensim-free), I'd love to have a look at it!

Thanks!

EDIT: Here is the error:

Traceback (most recent call last):
  File "cbow.py", line 48, in <module>
    word_context_product = merge([word_embedding, cbow], mode='dot')
    .
    .
    .
ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [?,1,100], [?,100].

You indeed need to reshape the word_embedding tensor. Two ways to do it:

  • Either you use a Reshape() layer, imported from keras.layers.core, like this:

    word_embedding = Reshape((100,))(word_embedding)
    

    The argument of Reshape is a tuple with the target shape.

  • Or you can use a Flatten() layer, also imported from keras.layers.core, used like this:

    word_embedding = Flatten()(word_embedding)
    

    Without any argument, it simply removes the "empty" (size-1) dimensions. Both options are compared in the sketch after this list.
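
A minimal sketch comparing the two options (the vocabulary size of 1000 is an arbitrary assumption; the layers are the Keras 1.x API used above):

from keras.layers import Input, Embedding
from keras.layers.core import Reshape, Flatten

word_index = Input(shape=(1,))
emb = Embedding(input_dim=1000, output_dim=100)(word_index)
# Shape output = (None, 1, 100)

# Both remove the singleton dimension left by the Embedding layer:
via_reshape = Reshape((100,))(emb)  # Shape output = (None, 100)
via_flatten = Flatten()(emb)        # Shape output = (None, 100)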

Does that help?

EDIT:

Indeed, the second merge() is a bit trickier. The dot merge in Keras only accepts tensors of the same rank, i.e. the same len(shape). So what you can do is use a Reshape() layer to add back the empty dimension, and then use the argument dot_axes instead of concat_axis, which is not relevant for a dot merge. This is the solution I propose:

word_embedding = shared_embedding_layer(word_index)
# Shape output = (None,1,emb_size)
context_embeddings = shared_embedding_layer(context)
# Shape output = (None, 2*window_size, emb_size)
negative_words_embedding = shared_embedding_layer(negative_samples)
# Shape output = (None, negative, emb_size)

# Now the context words are averaged to get the CBOW vector
cbow = Lambda(lambda x: K.mean(x, axis=1),
              output_shape=(EMBEDDING_DIM,))(context_embeddings)
# Shape output = (None, emb_size)
cbow = Reshape((1, EMBEDDING_DIM))(cbow)
# Shape output = (None, 1, emb_size)

# Context is multiplied (dot product) with current word and negative
# sampled words
word_context_product = merge([word_embedding, cbow], mode='dot')
# Shape output = (None, 1, 1)
word_context_product = Flatten()(word_context_product)
# Shape output = (None,1)
negative_context_product = merge([negative_words_embedding, cbow],
                                 mode='dot', dot_axes=[2, 2])
# Shape output = (None, negative, 1)
negative_context_product = Flatten()(negative_context_product)
# Shape output = (None, negative)
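
As a side note, here is a hedged sketch of the targets binary_crossentropy would be trained against for these two outputs: 1 for the true word's product, 0 for each negative sample. The toy batch and the train_on_batch call are illustrative assumptions; the real batches come from v_gen.pretraining_batch_generator().

import numpy as np

batch_size = 4  # arbitrary toy value, not from the original generator
word = np.random.randint(0, v_gen.vocab_size, (batch_size, 1))
ctx = np.random.randint(0, v_gen.vocab_size, (batch_size, 2 * v_gen.window_size))
neg = np.random.randint(0, v_gen.vocab_size, (batch_size, v_gen.negative))

# Target 1 for the (word, context) product, 0 for every negative product.
pos_labels = np.ones((batch_size, 1))
neg_labels = np.zeros((batch_size, v_gen.negative))

model.train_on_batch([word, ctx, neg], [pos_labels, neg_labels])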

Does it work? :)

The problem came from TF's rigidity regarding matrix multiplication. The merge "dot" mode calls the backend batch_dot() function and, unlike Theano, TensorFlow requires the matrices to have the same rank: read here
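
To illustrate that constraint, here is a minimal backend-level sketch (assuming Keras 1.x with the TensorFlow backend; the placeholder shapes mirror word_embedding and cbow from above):

from keras import backend as K

x = K.placeholder(shape=(None, 1, 100))  # rank 3, like word_embedding
y = K.placeholder(shape=(None, 100))     # rank 2, like cbow

# K.batch_dot(x, y) would raise the "Shape must be rank 2 but is rank 3"
# ValueError on TensorFlow, whereas Theano tolerates mixed ranks.
y3 = K.reshape(y, (-1, 1, 100))          # back to rank 3
z = K.batch_dot(x, y3, axes=[2, 2])      # Shape output = (None, 1, 1)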