Optimization Target must be a link
I have an autoencoder model with 4 linear layers, written using chainer.Chain. Running the optimizer.setup line in the trainer section gives me the following error:
TypeError                                 Traceback (most recent call last)
<ipython-input-9-a2aabc58d467> in <module>()
      8
      9 optimizer = optimizers.AdaDelta()
---> 10 optimizer.setup(sda)
     11
     12 train_iter = iterators.SerialIterator(train_data,batchsize)

/usr/local/lib/python3.6/dist-packages/chainer/optimizer.py in setup(self, link)
    415         """
    416         if not isinstance(link, link_module.Link):
--> 417             raise TypeError('optimization target must be a link')
    418         self.target = link
    419         self.t = 0

TypeError: optimization target must be a link
The link to the StackedAutoEncoder class is here: StackAutoEncoder link
The link to the NNBase class, from which the AutoEncoder class is written, is here: NNBase link
model = chainer.Chain(
enc1=L.Linear(1764, 200),
enc2=L.Linear(200, 30),
dec2=L.Linear(30, 200),
dec1=L.Linear(200, 1764)
)
sda = StackedAutoEncoder(model, gpu=0)
sda.set_order(('enc1', 'enc2'), ('dec2', 'dec1'))
sda.set_optimizer(Opt.AdaDelta)
sda.set_encode(encode)
sda.set_decode(decode)
from chainer import iterators, training, optimizers
from chainer import Link, Chain, ChainList
optimizer = optimizers.AdaDelta()
optimizer.setup(sda)
train_iter = iterators.SerialIterator(train_data, batchsize)
valid_iter = iterators.SerialIterator(test_data, batchsize)
updater = training.StandardUpdater(train_iter, optimizer)
trainer = training.Trainer(updater, (epoch, "epoch"), out="result")
from chainer.training import extensions
trainer.extend(extensions.Evaluator(valid_iter, sda, device=gpu))
A Chain is composed of Links. I would like to understand why the optimizer does not recognize sda from StackedAutoencoder(model)?
StackedAutoencoder inherits from the NNBase class, which in turn inherits from object, so it is not a chainer.Chain. That is why the isinstance(link, link_module.Link) check in optimizer.setup fails: sda merely wraps the chainer.Chain you built, it is not itself a Link.
For how to define your own network, you can refer to the official examples. For instance, the MNIST example defines MLP as follows:
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # the size of the inputs to each layer will be inferred
            self.l1 = L.Linear(None, n_units)  # n_in -> n_units
            self.l2 = L.Linear(None, n_units)  # n_units -> n_units
            self.l3 = L.Linear(None, n_out)    # n_units -> n_out

    def forward(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)