Weird gradient results with recurrent layers

I have been experimenting with very basic recurrent networks and I am seeing really strange behaviour. I spent quite some time trying to narrow down where things go wrong, and in the end I found that the gradients computed by theano and by finite differences are radically different when a recurrent layer is used. What is going on here?

Here is the kind of problem I am dealing with:

I have n_seq sequences of n_steps feature vectors of dimension n_feat, together with their labels among n_class classes. The labels are per time step, not per sequence (so I have n_seq*n_steps labels). My goal is to train a model to classify the feature vectors correctly.

Here is my minimal example:

(In reality, the data would contain some sequential information, so a recurrent network should do better, but in this minimal example I generate purely random data, which is enough to expose the bug.)

I create two minimal networks:

1) A regular feed-forward one (non-recurrent), with only an input layer and an output layer with softmax (no hidden layer). I discard the sequential information by treating the data as a "batch" of n_seq*n_steps "independent" feature vectors.

2) An identical network, except that the output layer is recurrent. The batch is now of size n_seq and each input is a full sequence of n_steps feature vectors. Finally, I reshape the output into a "batch" of size n_seq*n_steps.

If the recurrent weights are set to 0, the two networks should be equivalent. Indeed, I do see that the initial losses of both networks are the same in that case, whatever random initialization of the feed-forward weights I use. If I implement finite differentiation, I also find that the (initial) gradients with respect to the feed-forward weights are the same (as they should be). However, the gradients obtained from theano are radically different (but only for the recurrent network).
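
To make the equivalence argument explicit, here is a minimal NumPy sketch of my own (schematic, not part of the actual networks below): the recurrent layer computes roughly h_t = softmax(x_t.W + h_{t-1}.W_rec + b), so with W_rec = 0 every time step reduces to the plain feed-forward mapping softmax(x_t.W + b).

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = np.random.randn(5, 2)        # one sequence: 5 time steps, n_feat = 2
W_in = np.random.randn(2, 2)     # input-to-hidden weights (random)
W_r = np.zeros((2, 2))           # recurrent weights set to 0
b0 = np.zeros(2)                 # bias set to 0

h = np.zeros(2)
rec_out = []
for t in range(5):               # unrolled recurrence
    h = softmax(x[t].dot(W_in) + h.dot(W_r) + b0)
    rec_out.append(h)

ff_out = softmax(x.dot(W_in) + b0)             # feed-forward on each vector independently
print(np.allclose(np.array(rec_out), ff_out))  # True: identical outputs when W_r == 0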

Here is my code with sample results:

Note: I get the following warning the first time this is run. I don't know what triggers it, but I bet it is related to my problem: Warning: In the strict mode, all necessary shared variables must be passed as a part of non_sequences

Any insight would be greatly appreciated!

Code:

import numpy as np
import theano
import theano.tensor as T
import lasagne


# GENERATE RANDOM DATA
n_steps = 10**4
n_seq = 10
n_feat = 2
n_class = 2
data_X = lasagne.utils.floatX(np.random.randn(n_seq, n_steps, n_feat))
data_y = np.random.randint(n_class, size=(n_seq, n_steps))

# INITIALIZE WEIGHTS
# feed-forward weights (random)
W = theano.shared(lasagne.utils.floatX(np.random.randn(n_feat,n_class)), name="W")
# recurrent weights (set to 0)
W_rec = theano.shared(lasagne.utils.floatX(np.zeros((n_class,n_class))), name="Wrec")
# bias (set to 0)
b = theano.shared(lasagne.utils.floatX(np.zeros((n_class,))), name="b")



def create_functions(model, X, y, givens):
    """Helper for building a network."""
    loss = lasagne.objectives.categorical_crossentropy(lasagne.layers.get_output(model, X), y).mean()
    get_loss = theano.function(
        [], loss,
        givens=givens
    )
    all_params = lasagne.layers.get_all_params(model)
    get_theano_grad = [
        theano.function(
            [], g,
            givens=givens
        )
        for g in theano.grad(loss, all_params)
    ]
    return get_loss, get_theano_grad


def feedforward():
    """Creates a minimal feed-forward network."""
    l_in = lasagne.layers.InputLayer(
        shape=(n_seq*n_steps, n_feat),
    )
    l_out = lasagne.layers.DenseLayer(
        l_in,
        num_units=n_class,
        nonlinearity=lasagne.nonlinearities.softmax,
        W=W,
        b=b
    )
    model = l_out
    X = T.matrix('X')
    y = T.ivector('y')
    givens={
        X: theano.shared(data_X.reshape((n_seq*n_steps, n_feat))),
        y: T.cast(theano.shared(data_y.reshape((n_seq*n_steps,))), 'int32'),
    }
    return (model,) + create_functions(model, X, y, givens)


def recurrent():
    """Creates a minimal recurrent network."""
    l_in = lasagne.layers.InputLayer(
        shape=(n_seq, n_steps, n_feat),
    )
    l_out = lasagne.layers.RecurrentLayer(
        l_in,
        num_units=n_class,
        nonlinearity=lasagne.nonlinearities.softmax,
        gradient_steps=1,
        W_in_to_hid=W,
        W_hid_to_hid=W_rec,
        b=b,
    )
    l_reshape = lasagne.layers.ReshapeLayer(l_out, (n_seq*n_steps, n_class))
    model = l_reshape
    X = T.tensor3('X')
    y = T.ivector('y')
    givens={
        X: theano.shared(data_X),
        y: T.cast(theano.shared(data_y.reshape((n_seq*n_steps,))), 'int32'),
    }
    return (model,) + create_functions(model, X, y, givens)


def finite_diff(param, loss_func, epsilon):
    """Computes a finitie differentation gradient of loss_func wrt param.""" 
    loss = loss_func()
    P = param.get_value()
    grad = np.zeros_like(P)
    it = np.nditer(P, flags=['multi_index'])
    while not it.finished:
        ind = it.multi_index
        dP = P.copy()
        dP[ind] += epsilon
        param.set_value(dP)
        grad[ind] = (loss_func()-loss)/epsilon
        it.iternext()
    param.set_value(P)
    return grad


def theano_diff(net, get_theano_grad):
    """Evaluates the theano gradient functions and extracts those for W and b."""
    for p,g in zip(lasagne.layers.get_all_params(net), get_theano_grad):
        if p.name == "W":
            gW = np.array(g())
        if p.name == "b":
            gb = np.array(g())
    return gW, gb


def compare_ff_rec():
    eps = 1e-3 # for finite differentiation
    ff, get_loss_ff, get_theano_grad_ff = feedforward()
    rec, get_loss_rec, get_theano_grad_rec = recurrent()
    gW_ff_finite = finite_diff(W, get_loss_ff, eps)
    gb_ff_finite = finite_diff(b, get_loss_ff, eps)
    gW_rec_finite = finite_diff(W, get_loss_rec, eps)
    gb_rec_finite = finite_diff(b, get_loss_rec, eps)
    gW_ff_theano, gb_ff_theano = theano_diff(ff, get_theano_grad_ff)
    gW_rec_theano, gb_rec_theano = theano_diff(rec, get_theano_grad_rec)
    print "\nloss:"
    print "FF:\t", get_loss_ff()
    print "REC:\t", get_loss_rec()
    print "\ngradients:"
    print "W"
    print "FF finite:\n", gW_ff_finite.ravel()
    print "FF theano:\n", gW_ff_theano.ravel()
    print "REC finite:\n", gW_rec_finite.ravel()
    print "REC theano:\n", gW_rec_theano.ravel()
    print "b"
    print "FF finite:\n", gb_ff_finite.ravel()
    print "FF theano:\n", gb_ff_theano.ravel()
    print "REC finite:\n", gb_rec_finite.ravel()
    print "REC theano:\n", gb_rec_theano.ravel()


compare_ff_rec()

Results:

loss:
FF:     0.968060314655
REC:    0.968060314655

gradients:
W
FF finite:
[ 0.23925304 -0.23907423  0.14013052 -0.14001131]
FF theano:
[ 0.23917811 -0.23917811  0.14011626 -0.14011627]
REC finite:
[ 0.23931265 -0.23907423  0.14024973 -0.14001131]
REC theano:
[  1.77408110e-05  -1.77408110e-05   1.21677476e-05  -1.21677458e-05]
b
FF finite:
[ 0.00065565 -0.00047684]
FF theano:
[ 0.00058145 -0.00058144]
REC finite:
[ 0.00071526 -0.00047684]
REC theano:
[  7.53380482e-06  -7.53380482e-06]

The problem was coming from the (maybe) non-intuitive effect of gradient_steps truncation in BPTT, as explained here: https://groups.google.com/forum/#!topic/theano-users/QNge6fC6C4s
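
For reference, here is the kind of change that resolves the discrepancy in my case (a sketch, assuming full BPTT is actually what you want): with gradient_steps=1, backpropagation through the scan is truncated to a single step, so, as far as I understand, the symbolic gradient only keeps a tiny fraction of the contributions that the finite-difference estimate sees. Passing gradient_steps=-1 (the Lasagne default, i.e. no truncation) makes the theano and finite-difference gradients agree.

# Sketch of the fix: build the recurrent layer without gradient truncation.
l_out = lasagne.layers.RecurrentLayer(
    l_in,
    num_units=n_class,
    nonlinearity=lasagne.nonlinearities.softmax,
    gradient_steps=-1,   # full backprop through time, instead of gradient_steps=1
    W_in_to_hid=W,
    W_hid_to_hid=W_rec,
    b=b,
)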