Distributed Tensorflow Errors

While running a distributed TensorFlow (TF v0.9.0rc0) setup, I launch 3 parameter servers and then 6 workers. The parameter servers seem to be fine, each giving the message Started server with target: grpc://localhost:2222. But the workers give other errors (below) that I have questions about.
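For reference, a cluster of this shape would be brought up roughly like the sketch below (a minimal illustration, not my actual scripts; the hostnames, port, and task numbering are assumptions):

import tensorflow as tf

# Assumed layout: 3 parameter servers and 6 workers (hostnames are placeholders)
cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222", "ps1:2222", "ps2:2222"],
    "worker": ["worker%d:2222" % i for i in range(6)],
})

# Each process starts exactly one server for its own job/task pair
server = tf.train.Server(cluster, job_name="ps", task_index=0)

# Parameter servers just serve variables; worker processes would instead
# go on to build the graph and open a session against server.target
server.join()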

It looks to me like the machines sometimes fail to communicate with one another, producing the socket error, connection refused errors, and the workers seem unable to find the parameter servers when initializing their variables, producing the Cannot assign a device errors.

Can anyone help me understand what each of these errors means, how much each one matters, and point me toward how to fix them if needed?

Specifically:

  1. Why am I getting socket errors?
  2. Why are there Master init: Unavailable: problems, and what do they mean?
  3. How do I make sure the devices being requested are available?
  4. Does this look like something I should post to the issues page of the tensorflow github account?

Notes on the setup:


All of the workers give this error (IP addresses changed):

E0719 12:06:17.711635677    2543 tcp_client_posix.c:173]  
 failed to connect to 'ipv4:192.168.xx.xx:2222': socket error: connection refused

But all of the non-chief workers also give:

E tensorflow/core/distributed_runtime/master.cc:202] Master init: Unavailable: 

Additionally, some of the non-chief workers crash with this error:

Traceback (most recent call last):  
    File "main.py", line 219, in <module>  
        r.main()  
    File "main.py", line 119, in main  
        with sv.prepare_or_wait_for_session(server.target, config=tf.ConfigProto(gpu_options=gpu_options)) as sess:  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/supervisor.py", line 691, in prepare_or_wait_for_sessionn max_wait_secs=max_wait_secs)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/session_manager.py", line 282, in wait_for_session  
        sess.run([self._local_init_op])  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
        run_metadata_ptr)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run  
        feed_dict_string, options, run_metadata)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run  
        target_list, options, run_metadata)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call  
        raise type(e)(node_def, op, message)  
    tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'save/restore_slice_23':
        Could not satisfy explicit device specification '/job:ps/task:3/device:CPU:0'
        because no devices matching that specification are registered in this process; available devices: 
            /job:ps/replica:0/task:0/cpu:0,
            /job:ps/replica:0/task:1/cpu:0,
            /job:ps/replica:0/task:2/cpu:0,
            /job:ps/replica:0/task:4/cpu:0,
            /job:worker/replica:0/task:0/cpu:0,
            /job:worker/replica:0/task:0/gpu:0,
            /job:worker/replica:0/task:1/cpu:0,
            /job:worker/replica:0/task:1/gpu:0,
            /job:worker/replica:0/task:2/cpu:0,
            /job:worker/replica:0/task:2/gpu:0 
[[Node: save/restore_slice_23 = RestoreSlice[dt=DT_FLOAT, preferred_shard=-1, _device="/job:ps/task:3/device:CPU:0"](save/Const, save/restore_slice_23/tensor_name, save/restore_slice_23/shape_and_slice)]]
Caused by op u'save/restore_slice_23', defined at:  
    File "main.py", line 219, in <module>  
        r.main()  
    File "main.py", line 101, in main  
        saver = tf.train.Saver()  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 845, in __init__  
        restore_sequentially=restore_sequentially)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 515, in build  
        filename_tensor, vars_to_save, restore_sequentially, reshape)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 271, in _AddRestoreOps  
        values = self.restore_op(filename_tensor, vs, preferred_shard)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 186, in restore_op
        preferred_shard=preferred_shard)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/io_ops.py", line 202, in _restore_slice  
        preferred_shard, name=name)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 358, in _restore_slice  
        preferred_shard=preferred_shard, name=name)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op  
        op_def=op_def)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op  
        original_op=self._default_original_op, op_def=op_def)  
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__  
        self._traceback = _extract_stack()

I figured out what my problem was.

TL;DR: The chief needs to know about all of the variables in order to initialize all of them. Non-chief workers can't create variables of their own.

I was converting an old program, where all workers held some independent variables but needed to share a few of them (I had been using ZMQ to pass those around), over to a distributed TensorFlow setup, and forgot to have all of the workers build all of the variables. I had something like

# Create worker specific variable
with tf.variable_scope("world_{}".format(worker_id)):
    w1 = tf.get_variable("weight", shape=(input_dim, hidden_dim), dtype=tf.float32,
                         initializer=tf.truncated_normal_initializer())

instead of doing something like this:

# Create all worker specific variables, so every process knows about them
all_w1 = {}
for worker in range(worker_cnt):
    with tf.variable_scope("world_{}".format(worker)):
        all_w1[worker] = tf.get_variable("weight", shape=(input_dim, hidden_dim), dtype=tf.float32,
                                         initializer=tf.truncated_normal_initializer())

# grab this worker's own variable
w1 = all_w1[worker_id]
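With the second version, every process (the chief included) builds the exact same graph, so the chief's init op covers every worker's variables and no session ever references a variable the chief doesn't know about.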

As for the errors...

I suspect this is what caused some of the workers to die with the Master init: Unavailable: error message above, since the chief never knew about the variables those workers wanted to create.
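To make the waiting behaviour concrete, here is a minimal sketch of the chief/non-chief flow (the task_index handling, logdir, shapes, and master address are assumptions, not my original code):

import tensorflow as tf

task_index = 0                   # assumed: parsed from a --task_index flag
is_chief = (task_index == 0)

# Every process must build the *same* graph -- including the other
# workers' variables -- so that the chief's init op covers them all
with tf.variable_scope("world_0"):
    w1 = tf.get_variable("weight", shape=(10, 20), dtype=tf.float32,
                         initializer=tf.truncated_normal_initializer())

sv = tf.train.Supervisor(is_chief=is_chief, logdir="/tmp/train_logs")

# The chief runs the init op; non-chief workers block here until the
# chief has initialized every variable they reference, and never get a
# session if the chief doesn't know about one of those variables
with sv.prepare_or_wait_for_session("grpc://localhost:2222") as sess:
    print(sess.run(w1).mean())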

I don't have a solid explanation for why the device wasn't found in the third (Cannot assign a device) error, but I think it is again because only the chief gets to create those nodes, and he didn't know about the new variables.
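For context on where that explicit device specification comes from: variables (and the Saver's restore ops, which inherit each variable's device) are typically spread across the ps tasks with a device function, roughly like this sketch (the ps_tasks count and shape are assumptions):

import tensorflow as tf

# Variables created under this device function are placed round-robin on
# /job:ps/task:0 ... /job:ps/task:4, which is how a restore node can end
# up pinned to '/job:ps/task:3/device:CPU:0' even though this process
# never registered that device
with tf.device(tf.train.replica_device_setter(ps_tasks=5)):
    w1 = tf.get_variable("weight", shape=(10, 20), dtype=tf.float32)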

The first error seems to just be the machines not being ready to talk yet after a failure, since I don't see it anymore after the fix. I do still see it if I kill a worker and restart him, but it doesn't seem to be a problem if they all start up together.


Anyway, I hope this helps if anyone ever runs into the same errors later.