AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing'
I was initially loading a fine-tuned GPT-2 model in Flask. The model was loaded during the initialization function as follows:
app.modelgpt2 = torch.load('models/model_gpt2.pt', map_location=torch.device('cpu'))
app.modelgpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
But while performing the prediction task with the snippet below:
from flask import current_app
input_ids = current_app.modelgpt2tokenizer.encode("sample sentence here", return_tensors='pt')
sample_outputs = current_app.modelgpt2.generate(input_ids,
do_sample=True,
top_k=50,
min_length=30,
max_length=300,
top_p=0.95,
temperature=0.7,
num_return_sequences=1)
it throws the error mentioned in the question:
AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing'
The error trace, starting from the model.generate call:
File "/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1017, in generate
return self.sample(
File "/venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1531, in sample
outputs = self(
File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1044, in forward
transformer_outputs = self.transformer(
File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 861, in forward
print(self.gradient_checkpointing)
File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing'
Checking modeling_gpt2.py, self.gradient_checkpointing is set to False by default in the constructor of the class.
This issue only surfaced when the application ran inside a venv or under a deployment server such as uWSGI or gunicorn. It was resolved by using transformers version 4.10.0 instead of the latest package.
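If you need the existing torch.load checkpoint to keep working, pinning the version that produced it (assumed here to be 4.10.0) in requirements.txt is the simplest stopgap:

```
transformers==4.10.0
```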
From the Hugging Face forum:
From the error message, it looks like you used torch.save to save your whole model (and not the weights), which is not recommended at all, because when the model changes (like it did between 4.10 and 4.11) you then can't reload it directly with torch.load.
Our advice is to always use save_pretrained/from_pretrained to save/load your models, or if that's not possible, to save the weights (model.state_dict) with torch.save and then reload them with model.load_state_dict, as this will work across different versions of the model.
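The quoted advice can be sketched as follows. This is a minimal illustration with a tiny nn.Linear standing in for the fine-tuned GPT-2 model; for transformers models themselves, the save_pretrained/from_pretrained pair is the preferred route:

```python
import torch
import torch.nn as nn

# Save only the weights (state_dict), not the whole pickled module
# object, so the checkpoint survives library upgrades.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "model_weights.pt")

# To load: rebuild the architecture first, then restore the weights.
reloaded = nn.Linear(4, 2)
reloaded.load_state_dict(torch.load("model_weights.pt"))
```

Because load_state_dict only copies tensors into a freshly constructed model, the new object always has whatever attributes the current library version's __init__ defines.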