Removing bias from a neural network layer
I want to remove the bias parameter. I tried including `bias=None` where I define my neural network, but it didn't work.
net1 = NeuralNet(
layers=[ # two layers (hidden layer commented out)
('input', layers.InputLayer),
#('hidden', layers.DenseLayer),
('output', layers.DenseLayer),
],
# layer parameters:
input_shape=(None,2), # 2 inputs
#hidden_num_units=200, # number of units in hidden layer
output_nonlinearity=None, # output layer uses identity function
output_num_units=1, # 1 target value
# optimization method:
update=nesterov_momentum,
update_learning_rate=0.01,
update_momentum=0.9,
regression=True, # flag to indicate we're dealing with regression problem
max_epochs=400, # we want to train this many epochs
verbose=1,
bias = None
)
According to the Lasagne documentation for conv layers (dense layers are similar), the option for dropping the bias is:
b = None
At least going by the Lasagne docs, no layer has a "bias" parameter; the parameter is named "b". I can't speak for nolearn, since I don't use that package.
Edit:
Here is some Lasagne example code:
import lasagne

net = {}
net['input'] = lasagne.layers.InputLayer(shape=(None, 3, 224, 224), input_var=None)
# b=None removes the bias parameter from the layer entirely
net['conv'] = lasagne.layers.Conv2DLayer(net['input'], num_filters=5, filter_size=3, b=None)
print(net['conv'].get_params())
Returns:
[W]
alone, i.e. only the weight matrix, meaning there is no bias term.
# Build the network yourself with Lasagne, then hand it to nolearn
from lasagne.layers import InputLayer, DenseLayer

inputs = InputLayer(shape=(None, 2))                                 # 2 inputs
network = DenseLayer(inputs, num_units=1, nonlinearity=None, b=None) # 1 target value, no bias

net1 = NeuralNet(
    network,
    # We don't need the layer parameters (input_shape, output_num_units,
    # output_nonlinearity, ...) since we defined the layers ourselves above.
    # optimization method:
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    regression=True,  # flag to indicate we're dealing with a regression problem
    max_epochs=400,   # we want to train this many epochs
    verbose=1,
)
I think this should work. There may be a kwarg for passing the network in (I don't remember its name), but I believe the network defaults to the first positional argument if none is given.
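Independently of Lasagne or nolearn, it may help to see what dropping the bias actually changes. A dense layer normally computes y = xW + b; with the bias removed it computes only y = xW. A minimal NumPy sketch (the `dense` helper here is hypothetical, not part of either library):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b=None):
    """Toy dense layer: pass b=None to drop the bias term entirely."""
    y = x @ W
    if b is not None:
        y = y + b
    return y

x = rng.standard_normal((4, 2))  # batch of 4 samples, 2 inputs
W = rng.standard_normal((2, 1))  # 2 inputs -> 1 output

y_no_bias = dense(x, W)               # b=None: y = xW only
y_bias = dense(x, W, b=np.ones(1))    # with bias: y = xW + b

# The two outputs differ exactly by the bias value
print(np.allclose(y_bias - y_no_bias, 1.0))
```

Without the bias, the layer can only represent linear maps through the origin, which is exactly the constraint you want when removing the bias parameter.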