LSTM model implementation in PyTorch

I am trying to implement a CNN+LSTM model in PyTorch, but I have a question about the LSTM part (I have never used an LSTM before). Could you write the many-to-one LSTM model class (Image link: https://i.ibb.co/SRGWT5j/lstm.png )...

For nn.LSTM in PyTorch, according to the documentation https://pytorch.org/docs/stable/nn.html?highlight=lstm#torch.nn.LSTM

its constructor takes (input_size, hidden_size, num_layers), where input_size is the embedding dimension (ignoring the bidirectional parameter for now; we can also pass in an initial hidden_state and cell_state).

So we need to pass it a tensor of shape [max sentence length, batch size, embedding size] (this is the default batch_first=False layout).
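
Here is a minimal sketch of that API in isolation; all the sizes below are arbitrary placeholders, not values from the question:

import torch
import torch.nn as nn

# hypothetical sizes, just for illustration
embedding_dim = 100   # input_size: size of each input vector
hidden_dim = 256      # hidden_size
n_layers = 2          # num_layers

lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers)

seq_len, batch_size = 20, 4
# input shape: [max sentence length, batch size, embedding size]
inputs = torch.randn(seq_len, batch_size, embedding_dim)

# optional initial hidden and cell states,
# each of shape [num_layers, batch size, hidden size]
h0 = torch.zeros(n_layers, batch_size, hidden_dim)
c0 = torch.zeros(n_layers, batch_size, hidden_dim)

output, (hn, cn) = lstm(inputs, (h0, c0))
print(output.shape)  # torch.Size([20, 4, 256])
print(hn.shape)      # torch.Size([2, 4, 256])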

Just a sample model:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
        super(Model, self).__init__()
        self.output_size = output_size
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim

        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=drop_prob)
        # maps the last hidden state to the desired output size
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, sentence):
        # sentence: [max sentence length, batch size] of token indices
        sentence = sentence.long()
        embeds = self.embedding(sentence)
        # embeds: [max sentence length, batch size, embedding size]
        lstm_out, hidden = self.lstm(embeds)
        # lstm_out will be of shape [max sentence length, batch size, hidden size],
        # so for simple many-to-one we can just use the output of the
        # last time step of the LSTM
        out = self.fc(lstm_out[-1, :, :])
        # out: [batch size, output size]
        return out
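
And a quick sanity check of the model above; the hyperparameter values here are arbitrary placeholders:

model = Model(vocab_size=5000, output_size=2, embedding_dim=100,
              hidden_dim=256, n_layers=2)

# dummy batch: sentences of 20 tokens, batch size of 4
dummy = torch.randint(0, 5000, (20, 4))
out = model(dummy)
print(out.shape)  # torch.Size([4, 2])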

You can refer to this link, which explains LSTMs in PyTorch really well and also has an example of a SentimentNet model:

https://blog.floydhub.com/long-short-term-memory-from-zero-to-hero-with-pytorch/