I can't use pretrained_model=URLs.WT103 from fastai.text
I'm trying to build a model that takes words as input and produces a paragraph as output. When I apply the same example from the fastai.text docs to my own dataset, I get an error at the step below. If you follow that page, everything runs fine until the code shown here, which raises an error. What could be causing it?
Code:
from fastai import *
from fastai.text import *
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
# Language model data
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
# Classifier model data
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv',
vocab=data_lm.train_ds.vocab, bs=32)
data_lm.save()
data_clas.save()
data_lm = TextLMDataBunch.load(path)
data_clas = TextClasDataBunch.load(path, bs=32)
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)
learn.fit_one_cycle(1, 1e-2)
The line that fails:
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)
Output:
102 if not ps: return None
103 if b is None: return ps[0].requires_grad
--> 104 for p in ps: p.requires_grad=b
105
106 def trainable_params(m:nn.Module)->ParamList:
RuntimeError: you can only change requires_grad flags of leaf variables. If you want to use a computed variable in a subgraph that doesn't require differentiation use var_no_grad = var.detach().
Set grad to False with torch.set_grad_enabled(False) (call this before creating the learner object), then wrap the learn.fit_one_cycle() call in a with torch.enable_grad(): block.
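A minimal sketch of that workaround, assuming the fastai v1 API used in the question (the function names and signatures are taken from the question itself, not verified against other fastai versions):

import torch
from fastai import *
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')

# Disable autograd before building the learner, so requires_grad
# is not flipped on non-leaf variables while the pretrained
# WT103 weights are wired into the model.
torch.set_grad_enabled(False)
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103, drop_mult=0.5)

# Re-enable gradients only for the training call.
with torch.enable_grad():
    learn.fit_one_cycle(1, 1e-2)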