Using pretrained gensim Word2vec embedding in keras

I have trained a word2vec model in gensim. In Keras, I want to use it to build matrices for my sentences with that word embedding, but storing a matrix for every sentence is very space- and memory-inefficient. So I want to make an Embedding layer in Keras that does this, so it can be used by further layers (LSTM). Can you tell me in detail how to do this?

PS: It is different from other questions because I am using gensim to train the word2vec model, not Keras.

Suppose you have the following data that you need to encode:

docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']

Then you have to tokenize it with the Keras Tokenizer like this, and find the vocab_size:

from keras.preprocessing.text import Tokenizer

t = Tokenizer()
t.fit_on_texts(docs)
# +1 because the Tokenizer's indices start at 1; index 0 is reserved for padding
vocab_size = len(t.word_index) + 1

Then you can encode the documents into integer sequences like this:

encoded_docs = t.texts_to_sequences(docs)
print(encoded_docs)
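
Each document is now a list of integer word ids taken from t.word_index. As a quick sanity check (a sketch; the exact ids depend on word frequencies in docs), you can map the ids back to words:

index_to_word = {i: w for w, i in t.word_index.items()}
# prints the lower-cased tokens of each doc, e.g. ['well', 'done'], ['good', 'work'], ...
print([[index_to_word[i] for i in seq] for seq in encoded_docs])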

Then you can pad the sequences so that all of them have a fixed length:

from keras.preprocessing.sequence import pad_sequences

max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
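
After padding, all documents sit in one integer matrix, which is exactly what the Embedding layer expects as input (a quick check):

# padded_docs is a (10, 4) integer array; docs shorter than max_length get
# trailing zeros because padding='post'
print(padded_docs.shape)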

Then build the embedding matrix from the word2vec model:

from numpy import asarray, zeros

# load embedding as a dict
def load_embedding(filename):
    # load embedding into memory, skip the first line (the word2vec header: vocab size and dimension)
    file = open(filename, 'r')
    lines = file.readlines()[1:]
    file.close()
    # create a map of words to vectors
    embedding = dict()
    for line in lines:
        parts = line.split()
        # key is string word, value is numpy array for vector
        embedding[parts[0]] = asarray(parts[1:], dtype='float32')
    return embedding

# create a weight matrix for the Embedding layer from a loaded embedding
def get_weight_matrix(embedding, vocab):
    # total vocabulary size plus 0 for unknown words
    vocab_size = len(vocab) + 1
    # define weight matrix dimensions with all 0
    weight_matrix = zeros((vocab_size, 100))
    # step through the vocab, storing vectors with the Tokenizer's integer mapping
    for word, i in vocab.items():
        vector = embedding.get(word)
        if vector is not None:  # words missing from the embedding keep an all-zero row
            weight_matrix[i] = vector
    return weight_matrix

# load embedding from file
raw_embedding = load_embedding('embedding_word2vec.txt')
# get vectors in the right order
embedding_vectors = get_weight_matrix(raw_embedding, t.word_index)
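
If you trained the model in gensim yourself (as in the question), you can also skip the text-file round trip and fill the same kind of weight matrix straight from the in-memory model. A minimal sketch, assuming the model was saved as 'w2v.model' (a hypothetical path) with 100-dimensional vectors and that t is the Tokenizer fitted above:

import numpy as np
import gensim

w2v = gensim.models.Word2Vec.load('w2v.model')

embedding_vectors = np.zeros((vocab_size, w2v.vector_size))
for word, i in t.word_index.items():
    if word in w2v.wv:  # words the model never saw keep an all-zero row
        embedding_vectors[i] = w2v.wv[word]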

Once you have the embedding matrix, you can use it in an Embedding layer like this:

e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)

This layer can then be used to build a model like this:

from numpy import array
from keras.models import Sequential
from keras.layers import Dense, Flatten, Embedding

# example labels for the 10 docs above (first 5 positive, last 5 negative)
labels = array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_vectors], input_length=4, trainable=False)
model.add(e)
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
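
Since the question mentions LSTMs: the same frozen Embedding layer can feed an LSTM instead of being flattened. A minimal sketch under the same setup (the 32 LSTM units are an arbitrary choice for the sketch):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(vocab_size, 100, weights=[embedding_vectors],
                    input_length=max_length, trainable=False))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(padded_docs, labels, epochs=50, verbose=0)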

All the code is adapted from this awesome blog post. Follow it to learn more about embeddings that use GloVe.

For using word2vec, see this post.

My code for a gensim-trained w2v model. Assume all the words trained in the w2v model are now in a list variable called all_words.

from keras.preprocessing.text import Tokenizer
from keras.layers import Embedding
import gensim
import pandas as pd
import numpy as np
from itertools import chain

w2v = gensim.models.Word2Vec.load("models/w2v.model")
vocab = w2v.wv.vocab    
t = Tokenizer()

vocab_size = len(all_words) + 1
t.fit_on_texts(all_words)

def get_weight_matrix():
    # define weight matrix dimensions with all 0
    weight_matrix = np.zeros((vocab_size, w2v.vector_size))
    # step through the vocab, storing vectors with the Tokenizer's integer mapping
    for i in range(len(all_words)):
        weight_matrix[i + 1] = w2v.wv[all_words[i]]
    return weight_matrix

embedding_vectors = get_weight_matrix()
emb_layer = Embedding(vocab_size, output_dim=w2v.vector_size, weights=[embedding_vectors], input_length=FIXED_LENGTH, trainable=False)
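
To actually feed sentences through emb_layer, they have to be encoded with the same Tokenizer t and padded to FIXED_LENGTH. A usage sketch (texts is a hypothetical list of training sentences):

from keras.preprocessing.sequence import pad_sequences

encoded = t.texts_to_sequences(texts)
padded = pad_sequences(encoded, maxlen=FIXED_LENGTH, padding='post')
# `padded` can now be passed to a model whose first layer is emb_layer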

With the new Gensim version this is easy:

w2v_model.wv.get_keras_embedding(train_embeddings=False)

There you have your Keras embedding layer.
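
One thing to keep in mind: the layer returned by get_keras_embedding() is filled with gensim's own vector table, so the integer inputs you feed it must be gensim's word indexes, not a Keras Tokenizer's. A minimal usage sketch:

from keras.models import Sequential
from keras.layers import LSTM, Dense

emb = w2v_model.wv.get_keras_embedding(train_embeddings=False)

model = Sequential()
model.add(emb)
model.add(LSTM(32))  # 32 units chosen arbitrarily for the sketch
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# the row index for a given word in this layer is w2v_model.wv.vocab[word].index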