Minimizing loss in a complex-valued network in tensorflow

I am currently trying to train a network that has complex-valued tensors as both input and output. As the loss function I take the norm of the pointwise difference between the output and the ground truth.
When I try to minimize this loss, TensorFlow's 'minimize' function complains about an unexpected complex number. I find this strange, since I would expect TensorFlow to be able to backpropagate through complex numbers. I have also explicitly checked that the loss value itself is a real-valued tensor.
The reason I am stuck is that the error occurs deep inside TensorFlow's code and seems to be about the dtype of the gradients. At that level I find it hard to see what is happening under the hood and how these gradient computations are supposed to work. Can anyone help me figure out how a complex-valued network should be trained with TensorFlow?

Here is a minimal self-contained code example. It consists of a single complex-valued fully-connected layer and contains all the code up to and including the minimize call; below it is the corresponding error message I get:

import tensorflow as tf

def do_training():
    # Create placeholders for potential training-data/labels
    train_data_node = tf.placeholder(tf.complex64,
                                     shape=(25, 10),
                                     name="train_data_node")

    train_labels_node = tf.placeholder(tf.complex64,
                                       shape=(25, 10),
                                       name="train_labels_node")

    # create and initialise the weights
    weights = {
        'fc_w1': tf.Variable(tf.complex(tf.random_normal([10, 10], stddev=0.01, dtype=tf.float32),
                                        tf.random_normal([10, 10], stddev=0.01, dtype=tf.float32))),
        'fc_b1': tf.Variable(tf.complex(tf.random_normal([10]), tf.random_normal([10]))),
    }

    prediction = model(train_data_node, weights)
    loss = tf.real(tf.norm(prediction - train_labels_node))

    train_op = tf.train.AdamOptimizer(learning_rate=1.0).minimize(loss)

def model(data, weights):
    l1 = tf.matmul(data, weights['fc_w1'])    # complex fully-connected layer
    l1 = l1 + weights['fc_b1']                # complex bias
    return l1

The error message:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/myFolder/training.py", line 23, in do_training
    train_op = tf.train.AdamOptimizer(learning_rate=1.0).minimize(loss)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 315, in minimize
    grad_loss=grad_loss)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 392, in compute_gradients
    if g is not None and v.dtype != dtypes.resource])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 517, in _assert_valid_dtypes
    dtype, t.name, [v for v in valid_dtypes]))
ValueError: Invalid type tf.complex64 for Variable:0, expected: [tf.float32, tf.float64, tf.float16].

Edit: I tried replacing the complex weights with real-valued ones. This required casting those weights to complex values before multiplying them into the fully-connected layer. This works, so my current hypothesis is that TensorFlow does not support gradient computation for complex-valued weights. Can anyone confirm this?
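A minimal sketch of that workaround (the function name and variable names below are my own illustration, not from the original code): keep the real and imaginary parts as separate float32 variables and only combine them into complex tensors inside the forward pass, so the optimizer never sees a complex variable.

import tensorflow as tf

def do_training_real_weights():
    train_data_node = tf.placeholder(tf.complex64, shape=(25, 10))
    train_labels_node = tf.placeholder(tf.complex64, shape=(25, 10))

    # Real and imaginary parts live in separate float32 variables,
    # the only dtype family the optimizer's dtype check accepts.
    weights = {
        'fc_w1_re': tf.Variable(tf.random_normal([10, 10], stddev=0.01)),
        'fc_w1_im': tf.Variable(tf.random_normal([10, 10], stddev=0.01)),
        'fc_b1_re': tf.Variable(tf.random_normal([10])),
        'fc_b1_im': tf.Variable(tf.random_normal([10])),
    }

    # Cast to complex only when building the forward pass.
    w1 = tf.complex(weights['fc_w1_re'], weights['fc_w1_im'])
    b1 = tf.complex(weights['fc_b1_re'], weights['fc_b1_im'])

    prediction = tf.matmul(train_data_node, w1) + b1
    loss = tf.real(tf.norm(prediction - train_labels_node))

    # minimize() now differentiates only with respect to float32 variables.
    return tf.train.AdamOptimizer(learning_rate=1.0).minimize(loss)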

You have already confirmed this yourself. From the same source code, the function _assert_valid_dtypes relies on

  def _valid_dtypes(self):
    """Valid types for loss, variables and gradients.
    Subclasses should override to allow other float types.
    Returns:
      Valid types for loss, variables and gradients.
    """
    return set([dtypes.float16, dtypes.float32, dtypes.float64])

This is exactly what the error message is telling you.
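The docstring above does hint that subclasses may override this whitelist (it says "to allow other float types", and complex is not a float type, so this goes beyond its intent). A hypothetical, untested sketch of such an override is below; even if the dtype assertion then passes, the Adam update kernels and some op gradients may still reject tf.complex64, so this is not a guaranteed fix.

import tensorflow as tf
from tensorflow.python.framework import dtypes

class ComplexFriendlyAdam(tf.train.AdamOptimizer):
    # Widen the whitelist checked by _assert_valid_dtypes. This only
    # bypasses the assertion; the underlying update kernels may still
    # fail on complex variables.
    def _valid_dtypes(self):
        return set([dtypes.float16, dtypes.float32, dtypes.float64,
                    dtypes.complex64, dtypes.complex128])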

This is not the only place where TF does not handle complex values correctly. Even computations such as tf.reduce_prod have problems with them.