Implementing Permutation of Complex Numbers In TensorFlow
In this associative LSTM paper, http://arxiv.org/abs/1602.03032, they require permuting a complex-valued tensor.
They provide their code here: https://github.com/mohammadpz/Associative_LSTM/blob/master/bricks.py#L79
I am trying to replicate it in TensorFlow. Here is what I did:
# shape: C x F/2
# output = self.permutations: [num_copies x cell_size]
permutations = []
indices = numpy.arange(self._dim // 2)  # e.g. [0, 1, 2, ..., 63] for self._dim == 128
for i in range(self._num_copies):
    numpy.random.shuffle(indices)  # e.g. [4, 48, 32, ...]
    # Each appended row has length dim: the shuffled indices for the real half,
    # followed by the same indices shifted by dim/2 for the imaginary half.
    permutations.append(numpy.concatenate(
        [indices,
         [ind + self._dim // 2 for ind in indices]]))
# C x F (numpy)
self.permutations = tf.constant(numpy.vstack(permutations), dtype=tf.int32)  # all C stored permutations
# output = self.permutations: [num_copies x cell_size]

def permute(complex_tensor):  # complex_tensor is [batch_size x cell_size]
    gather_tensor = tf.gather_nd(complex_tensor, self.permutations)
    return gather_tensor
Basically, my question is: how efficient is this in TensorFlow? Is there any way to keep the batch size of complex_tensor fixed?
Also, is tf.gather_nd the best way to go about this, or would it be better to do a for loop and iterate over each row of self.permutations with tf.gather?
def permute(self, complex_tensor):
    inputs_permuted = []
    for i in range(self.permutations.get_shape()[0].value):
        inputs_permuted.append(
            tf.gather(complex_tensor, self.permutations[i]))
    # Note: tf.concat(axis, values) is the pre-1.0 argument order;
    # in TF 1.x this would be tf.concat(inputs_permuted, 0).
    return tf.concat(0, inputs_permuted)
I would have thought that gather_nd would be more efficient.
Never mind, I figured it out. The trick is to permute the original input tensor with tf.transpose; that lets you run tf.gather over the entire matrix at once, and then you can tf.concat the resulting matrices back together. Sorry if this wasted anyone's time.
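For reference, here is a minimal sketch of what that trick might look like, assuming `permutations` is the [num_copies x cell_size] index matrix built above and `complex_tensor` is [batch_size x cell_size]; the function signature is illustrative only, and it uses the TF 1.x argument order for tf.concat:

import tensorflow as tf

def permute(complex_tensor, permutations):
    # Move the cell dimension to the front so tf.gather (which indexes axis 0)
    # selects cells instead of batch entries.
    transposed = tf.transpose(complex_tensor)         # [cell_size, batch_size]
    # A single gather over the whole permutation matrix.
    gathered = tf.gather(transposed, permutations)    # [num_copies, cell_size, batch_size]
    # Put the batch dimension back in front of the cell dimension for each copy.
    gathered = tf.transpose(gathered, [0, 2, 1])      # [num_copies, batch_size, cell_size]
    # Concatenate the copies along the batch axis.
    return tf.concat(tf.unstack(gathered), 0)         # [num_copies * batch_size, cell_size]

Because the gather and both transposes operate on the whole tensor at once, this avoids the Python-level loop over permutation rows used in the tf.gather version above.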