
PyTorch / Gensim - How to load pre-trained word embeddings

I want to load a pre-trained word2vec embedding with gensim into a PyTorch embedding layer.

So my question is: how do I get the embedding weights loaded by gensim into a PyTorch embedding layer?

Thanks in advance!

I think it is straightforward: just copy the embedding weights from gensim into the corresponding weights of the PyTorch embedding layer.

You need to make sure of two things: first, the shape of the weight matrix must be correct; second, the weights must be converted to the PyTorch FloatTensor type.
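
A minimal sketch of that manual copy, assuming the vectors were loaded with gensim's KeyedVectors (the path is a placeholder):

import torch
import torch.nn as nn
import gensim

# load the pre-trained vectors with gensim (the path is a placeholder)
model = gensim.models.KeyedVectors.load_word2vec_format('path/to/file')

vocab_size, emb_dim = model.vectors.shape   # the shape the layer must match
embedding = nn.Embedding(vocab_size, emb_dim)

# convert the numpy weights to a FloatTensor and copy them into the layer
embedding.weight.data.copy_(torch.FloatTensor(model.vectors))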

我只是想报告我关于使用 PyTorch 加载 gensim 嵌入的发现。


  • Solution for PyTorch 0.4.0 and newer:

Since v0.4.0 there is a new function from_pretrained() which makes loading an embedding very comfortable. Here is an example from the documentation.

import torch
import torch.nn as nn

# FloatTensor containing pretrained weights
weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
embedding = nn.Embedding.from_pretrained(weight)
# Get embeddings for index 1
input = torch.LongTensor([1])
embedding(input)

The weights from gensim can easily be obtained by:

import gensim
model = gensim.models.KeyedVectors.load_word2vec_format('path/to/file')
weights = torch.FloatTensor(model.vectors) # formerly syn0, which is soon deprecated

As noted by @Guglie: in newer gensim versions the weights can be obtained by model.wv:

weights = model.wv
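
Whichever way the weights are obtained, their row order follows gensim's vocabulary, so a word has to be mapped to its gensim index before querying the PyTorch layer. A short sketch, assuming a recent gensim (4.x, where the mapping is model.wv.key_to_index) and 'king' as an example word:

import torch
import torch.nn as nn

# embedding layer built from the gensim weights, as above
embedding = nn.Embedding.from_pretrained(torch.FloatTensor(model.wv.vectors))

# gensim 4.x: key_to_index maps a word to its row in the weight matrix
idx = model.wv.key_to_index['king']
vector = embedding(torch.LongTensor([idx]))   # same values as model.wv['king']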

  • Solution for PyTorch version 0.3.1 and older:

I was using version 0.3.1, and from_pretrained() is not available in that version.

Therefore I created my own from_pretrained so I can also use it with 0.3.1.

Code for from_pretrained for PyTorch versions 0.3.1 or lower:

def from_pretrained(embeddings, freeze=True):
    assert embeddings.dim() == 2, \
         'Embeddings parameter is expected to be 2-dimensional'
    rows, cols = embeddings.shape
    embedding = torch.nn.Embedding(num_embeddings=rows, embedding_dim=cols)
    embedding.weight = torch.nn.Parameter(embeddings)
    embedding.weight.requires_grad = not freeze
    return embedding

The embedding can then be loaded like this:

embedding = from_pretrained(weights)
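
If you want the embedding to stay trainable rather than frozen, the freeze flag of the helper above can be set accordingly:

embedding = from_pretrained(weights, freeze=False)  # requires_grad stays True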

I hope this helps someone.

I had the same question, except that I use the torchtext library with PyTorch as it helps with padding, batching, and other things. This is what I've done to load pre-trained embeddings with torchtext 0.3.0 and to pass them to PyTorch 0.4.1 (the PyTorch part uses the method mentioned by blue-phoenox):

import torch
import torch.nn as nn
import torchtext.data as data
import torchtext.vocab as vocab

# use torchtext to define the dataset field containing text
text_field = data.Field(sequential=True)

# load your dataset using torchtext, e.g.
dataset = data.Dataset(examples=..., fields=[('text', text_field), ...])

# build vocabulary
text_field.build_vocab(dataset)

# I use embeddings created with
# model = gensim.models.Word2Vec(...)
# model.wv.save_word2vec_format(path_to_embeddings_file)

# load embeddings using torchtext
vectors = vocab.Vectors(path_to_embeddings_file) # file created by gensim
text_field.vocab.set_vectors(vectors.stoi, vectors.vectors, vectors.dim)

# when defining your network you can then use the method mentioned by blue-phoenox
embedding = nn.Embedding.from_pretrained(torch.FloatTensor(text_field.vocab.vectors))

# pass data to the layer
dataset_iter = data.Iterator(dataset, ...)
for batch in dataset_iter:
    ...
    embedding(batch.text)

from gensim.models import Word2Vec

model = Word2Vec(reviews, size=100, window=5, min_count=5, workers=4)
# gensim model created

import torch
import torch.nn as nn

weights = torch.FloatTensor(model.wv.vectors)
embedding = nn.Embedding.from_pretrained(weights)

I had quite some problems understanding the documentation myself, and there aren't that many good examples around. Hopefully this example helps other people. It is a simple classifier that takes the pre-trained embeddings in matrix_embeddings. By setting requires_grad to False we make sure that we are not changing them.

class InferClassifier(nn.Module):
  def __init__(self, input_dim, n_classes, matrix_embeddings):
    """initializes a 2 layer MLP for classification.
    There are no non-linearities in the original code, Katia instructed us 
    to use tanh instead"""

    super(InferClassifier, self).__init__()

    #dimensionalities
    self.input_dim = input_dim
    self.n_classes = n_classes
    self.hidden_dim = 512

    #embedding
    self.embeddings = nn.Embedding.from_pretrained(matrix_embeddings)
    self.embeddings.weight.requires_grad = False

    #creates a MLP
    self.classifier = nn.Sequential(
            nn.Linear(self.input_dim, self.hidden_dim),
            nn.Tanh(), #not present in the original code.
            nn.Linear(self.hidden_dim, self.n_classes))

  def forward(self, sentence):
    """forward pass of the classifier
    I am not sure it is necessary to make this explicit."""

    #get the embeddings for the inputs
    u = self.embeddings(sentence)

    #forward to the classifier
    return self.classifier(u)

sentence is a vector of indices into matrix_embeddings instead of words.
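
A short sketch of how such an index vector could be built before calling the classifier, assuming a word2index dict was kept alongside matrix_embeddings (the dict contents and the choice of index 0 for unknown words are illustrative, not part of the original answer):

import torch

# hypothetical mapping from word to row of matrix_embeddings
word2index = {'<unk>': 0, 'the': 1, 'cat': 2, 'sat': 3}

tokens = ['the', 'cat', 'sat']
sentence = torch.LongTensor([word2index.get(t, 0) for t in tokens])  # unknown words -> row 0

# classifier = InferClassifier(input_dim, n_classes, matrix_embeddings)
# logits = classifier(sentence)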

I had a similar question: "after training and saving embeddings in binary format using gensim, how do I load them into torchtext?"

I just saved the file in txt format and then followed the superb tutorial on loading custom word embeddings.

import os
from os.path import basename
import torch
from gensim.models import KeyedVectors
import torchtext.vocab as vocab

def convert_bin_emb_txt(out_path, emb_file):
    txt_name = basename(emb_file).split(".")[0] + ".txt"
    emb_txt_file = os.path.join(out_path, txt_name)
    emb_model = KeyedVectors.load_word2vec_format(emb_file, binary=True)
    emb_model.save_word2vec_format(emb_txt_file, binary=False)
    return emb_txt_file

emb_txt_file = convert_bin_emb_txt(out_path,emb_bin_file)
custom_embeddings = vocab.Vectors(name=emb_txt_file,
                                  cache='custom_embeddings',
                                  unk_init=torch.Tensor.normal_)

TEXT.build_vocab(train_data,
                 max_size=MAX_VOCAB_SIZE,
                 vectors=custom_embeddings,
                 unk_init=torch.Tensor.normal_)

Tested with: PyTorch 1.2.0 and TorchText 0.4.0.

I added this answer because, with the accepted answer, I was not sure how to follow the linked tutorial and initialize all words that are not present in the embeddings with the normal distribution, and how to make the <unk> and <pad> vectors equal to zero.
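
For reference, a short sketch of the part I had to work out, adapted to the snippet above: after build_vocab, the rows for the unknown and padding tokens can be overwritten with zeros (this assumes the default special tokens of the torchtext Field; the unknown-word rows themselves are already initialized by unk_init=torch.Tensor.normal_ above):

import torch

# row indices of the special tokens in the vocabulary
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

EMBEDDING_DIM = TEXT.vocab.vectors.shape[1]

# zero out the <unk> and <pad> vectors so they carry no pre-trained signal
TEXT.vocab.vectors[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
TEXT.vocab.vectors[PAD_IDX] = torch.zeros(EMBEDDING_DIM)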