Pretrained RoBERTa relation extraction AttributeError
I'm trying to get the following pretrained Hugging Face model to work: https://huggingface.co/mmoradi/Robust-Biomed-RoBERTa-RelationClassification
I'm using the following code:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("mmoradi/Robust-Biomed-RoBERTa-RelationClassification")
model = AutoModel.from_pretrained("mmoradi/Robust-Biomed-RoBERTa-RelationClassification")
inputs = tokenizer("""The colorectal cancer was caused by mutations in angina""")
outputs = model(**inputs)
For some reason I get the following error when trying to generate the outputs, i.e. on the last line of my code:
--> 796 input_shape = input_ids.size()
797 elif inputs_embeds is not None:
798 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'list' object has no attribute 'size'
The inputs look like this:
{'input_ids': [0, 133, 11311, 1688, 3894, 337, 1668, 21, 1726, 30, 28513, 11, 1480, 347, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
I have no idea how to debug this, so any help or hints are welcome!
You have to tell the tokenizer which tensor type you want it to return. If you don't, it returns a dict of two plain Python lists (input_ids and attention_mask), and the model's forward pass then fails on input_ids.size(), which only exists on tensors:
inputs = tokenizer("""The colorectal cancer was caused by mutations in angina""", return_tensors="pt")
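For completeness, here is a minimal end-to-end sketch of the fix (the printed hidden size of 768 assumes this checkpoint is RoBERTa-base-sized; adjust if it is not):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("mmoradi/Robust-Biomed-RoBERTa-RelationClassification")
model = AutoModel.from_pretrained("mmoradi/Robust-Biomed-RoBERTa-RelationClassification")

# return_tensors="pt" makes the tokenizer return PyTorch tensors of shape
# [batch_size, seq_len] instead of plain Python lists
inputs = tokenizer("The colorectal cancer was caused by mutations in angina",
                   return_tensors="pt")

with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 15, 768])

Note that AutoModel loads the bare encoder, so outputs contains hidden states rather than relation labels; to get classification logits you would load the model with a classification head instead (e.g. AutoModelForSequenceClassification), assuming the checkpoint ships one.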