How to implement Theano.tensor.Lop in TensorFlow?

I've recently been rewriting some Theano code in TensorFlow, but I ran into a problem: I don't know how to write the Lop operator in TensorFlow. The API documentation for Theano.tensor.Lop is summarized below.
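For reference, the documentation describes T.Lop(f, wrt, eval_points) as the L-operator: the Jacobian of f with respect to wrt, left-multiplied by eval_points. That is, for v = eval_points it computes the vector-Jacobian product

Lop(f, wrt, v) = vᵀ · ∂f/∂wrt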

Here is the original Theano code:

import theano.tensor as T

def svgd_gradient(X0):

    hidden, _, mse = discrim(X0)
    grad = -1.0 * T.grad(mse.sum(), X0)

    kxy, neighbors, h = rbf_kernel(hidden)  #TODO

    coff = T.exp(-T.sum((hidden[neighbors] - hidden) ** 2, axis=1) / h ** 2 / 2.0)
    v = coff.dimshuffle(0, 'x') * (-hidden[neighbors] + hidden) / h ** 2

    X1 = X0[neighbors]
    hidden1, _, _ = discrim(X1)
    dxkxy = T.Lop(hidden1, X1, v)  # vector-Jacobian product: v^T * d(hidden1)/d(X1)

    svgd_grad = grad + dxkxy / 2.
    return grad, svgd_grad, dxkxy
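For a concrete sense of what the dxkxy line computes, here is T.Lop on a toy function (a minimal sketch of my own; the function and values are not from the code above):

import theano
import theano.tensor as T

x = T.vector('x')
y = x ** 2                                  # Jacobian of y wrt x is diag(2*x)
v = T.vector('v')

lop = T.Lop(y, x, v)                        # v^T J, which is v * 2x here
f = theano.function([x, v], lop)
print(f([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> [ 2.  8. 18.]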

I tried the approach below, but the dimensions come out wrong:

def svgd_gradient(self, x0):
    hidden, _, mse = self.discriminator(x0)
    grad = -tf.gradients(tf.reduce_sum(mse), x0)

    kxy, neighbors, h = self.rbf_kernel(hidden)

    coff = tf.exp(-tf.reduce_sum((hidden[neighbors] - hidden) ** 2, axis=1) / h ** 2 / 2.0)
    v = tf.expand_dims(coff, axis=1) * (-hidden[neighbors] + hidden) / h ** 2

    x1 = x0[neighbors]
    hidden1, _, _ = self.discriminator(x1, reuse=True)
    dxkxy = self.Lop(hidden1, x1, v)

    svgd_grad = grad + dxkxy / 2
    return grad, svgd_grad, dxkxy

def Lop(self, f, wrt, v):
    Lop = tf.multiply(tf.gradients(f, wrt), v)
    return Lop

You can try the following. Your Lop multiplies v elementwise into the already-summed gradient, but T.Lop is a vector-Jacobian product, and tf.gradients supports that directly through its grad_ys argument, which weights the output gradient:

def Lop(output, wrt, eval_points):
    # grad_ys seeds the backward pass with eval_points,
    # yielding eval_points^T * d(output)/d(wrt), i.e. Theano's Lop
    grads = tf.gradients(output, wrt, grad_ys=eval_points)
    return grads
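Note that tf.gradients returns a list, so you will usually want grads[0] before doing arithmetic such as grad + dxkxy / 2. The same toy check as above, in TensorFlow 1.x graph mode (again a sketch with made-up values), prints the same numbers:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[3])
y = x ** 2                                  # Jacobian of y wrt x is diag(2*x)
v = tf.constant([1.0, 2.0, 3.0])

lop = tf.gradients(y, x, grad_ys=v)[0]      # v^T J = v * 2x here

with tf.Session() as sess:
    print(sess.run(lop, {x: [1.0, 2.0, 3.0]}))  # -> [ 2.  8. 18.]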

(Credit: jhatford)