
Output from BERT gives str, not tensor, in forward function

import torch.nn as nn
from transformers import AutoModel

class BertModel(nn.Module):
    def __init__(self, pre_trained='bert-base-uncased'):
        super().__init__()
        self.bert = AutoModel.from_pretrained(pre_trained)
        self.dropout = nn.Dropout(0.1)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(768, 512)
        self.fc2 = nn.Linear(512, 6)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, inputs, mask, labels):
        pooled, cls_hs = self.bert(input_ids=inputs, attention_mask=mask)
        # debug prints: the numbered ones trace how far forward() gets
        print(pooled)
        print(cls_hs)
        print(inputs)
        print(mask)
        x = self.fc1(cls_hs)
        print(1)
        x = self.relu(x)
        print(2)
        x = self.dropout(x)
        print(3)
        # output layer
        x = self.fc2(x)
        print(4)
        # apply softmax activation
        x = self.softmax(x)
        print(5)
        return x

Printing pooled and cls_hs gives only the strings:

last_hidden_state
pooler_output

while inputs and mask print as the expected tensors:

tensor([[  101,  2342,  2393,  ...,     0,     0,     0],
        [  101, 14477,  4779,  ...,  4839,  6513,   102],
        [  101, 14777,  2111,  ..., 13677,  3613,   102],
        ...,
        [  101,  2113, 14047,  ...,     0,     0,     0],
        [  101,  5683,  3008,  ...,     0,     0,     0],
        [  101, 19046,  2075,  ...,  2050,  3308,   102]])
tensor([[1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 1, 1, 1],
        [1, 1, 1,  ..., 1, 1, 1],
        ...,
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 1, 1, 1]])

The traceback points into torch.nn.functional.linear:

def linear(input, weight, bias=None):
    if has_torch_function_variadic(input, weight, bias):
        return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
    return torch._C._nn.linear(input, weight, bias)

TypeError: linear(): argument 'input' (position 1) must be Tensor, not str

So pooled and cls_hs come back as the strings last_hidden_state and pooler_output, with no tensor values at all, and passing a string into fc1 is what raises the TypeError above.
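This happens because, in recent versions of transformers, the model call returns a ModelOutput object by default rather than a plain tuple. A ModelOutput is dict-like, so tuple-unpacking it yields its field names as strings, not the tensors. A minimal sketch of the same effect with an ordinary dict:

# Unpacking a dict-like object yields its keys, not its values.
outputs = {"last_hidden_state": "tensor A", "pooler_output": "tensor B"}
pooled, cls_hs = outputs
print(pooled)   # -> 'last_hidden_state' (a str)
print(cls_hs)   # -> 'pooler_output' (a str)

The string 'last_hidden_state' then flows into self.fc1, which is exactly the TypeError shown above.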

Try replacing the line that loads the pre-trained BERT model with:

self.bert = AutoModel.from_pretrained(pre_trained, return_dict=False)
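With return_dict=False the model returns a plain tuple, in the order (last_hidden_state, pooler_output). Note that the question's variable names are therefore swapped (pooled gets the last hidden state, cls_hs gets the pooled output), but fc1 still receives the (batch, 768) pooled output that nn.Linear(768, 512) expects. Alternatively, you can keep the default return type and read the fields by attribute. A minimal sketch of forward under that approach, assuming self.bert was loaded without return_dict=False:

def forward(self, inputs, mask, labels):
    outputs = self.bert(input_ids=inputs, attention_mask=mask)
    cls_hs = outputs.pooler_output   # (batch, 768) pooled [CLS] representation
    x = self.fc1(cls_hs)
    x = self.relu(x)
    x = self.dropout(x)
    x = self.fc2(x)
    return self.softmax(x)

Attribute access avoids relying on the tuple order, so the code stays correct even if the model adds more output fields.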