TPU suddenly stops training

I am trying to train a transformer model on a TPU in Google Cloud, following the instructions in the official tutorial. Loading the data works fine, and after running
t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --train_steps=500000 \
  --eval_steps=3000 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME

the training indeed starts as expected, and the output may look something like this:

I1118 14:48:18.978163 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2942
INFO:tensorflow:examples/sec: 978.827                                                                                             
I1118 14:48:18.978595 140580835792320 tpu_estimator.py:2308] examples/sec: 978.827                                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                               
I1118 14:48:18.979720 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                                                
I1118 14:48:18.979935 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.
I1118 14:48:24.292932 140577566803712 transport.py:157] Attempting refresh to obtain initial access_token                         
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.                                         
W1118 14:48:24.353135 140577566803712 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-8 in state READY, and health HEALTHY.
INFO:tensorflow:loss = 1.8486812, step = 113800 (6.536 sec)                                                                       
I1118 14:48:25.512768 140580835792320 basic_session_run_hooks.py:260] loss = 1.8486812, step = 113800 (6.536 sec)                 
INFO:tensorflow:global_step/sec: 15.2986                                                                 
I1118 14:48:25.514695 140580835792320 tpu_estimator.py:2307] global_step/sec: 15.2986                                             
INFO:tensorflow:examples/sec: 979.11                                                                                              
I1118 14:48:25.515115 140580835792320 tpu_estimator.py:2308] examples/sec: 979.11                                
INFO:tensorflow:Enqueue next (100) batch(es) of data to infeed.                                                                   
I1118 14:48:25.516618 140580835792320 tpu_estimator.py:600] Enqueue next (100) batch(es) of data to infeed.                       
INFO:tensorflow:Dequeue next (100) batch(es) of data from outfeed.                                       
I1118 14:48:25.516829 140580835792320 tpu_estimator.py:604] Dequeue next (100) batch(es) of data from outfeed.                    
INFO:tensorflow:Outfeed finished for iteration (388, 47)                                                                          
I1118 14:48:28.761935 140577575196416 tpu_estimator.py:279] Outfeed finished for iteration (388, 47)       
INFO:tensorflow:loss = 1.5237397, step = 113900 (6.573 sec)                                                                       
I1118 14:48:32.086134 140580835792320 basic_session_run_hooks.py:260] loss = 1.5237397, step = 113900 (6.573 sec)

However, sometimes, after an indeterminate number of iterations (sometimes fewer than 25k, sometimes more than 400k, sometimes never), training suddenly stops. There is no error message, but no further progress is made. In that case I get the following output:

I1120 13:40:33.828651 140684764419520 tpu_estimator.py:2307] global_step/sec: 16.3988
INFO:tensorflow:examples/sec: 1049.52
I1120 13:40:33.829339 140684764419520 tpu_estimator.py:2308] examples/sec: 1049.52
INFO:tensorflow:Enqueue next (1000) batch(es) of data to infeed.
I1120 13:40:33.830607 140684764419520 tpu_estimator.py:600] Enqueue next (1000) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1000) batch(es) of data from outfeed.
I1120 13:40:33.830862 140684764419520 tpu_estimator.py:604] Dequeue next (1000) batch(es) of data from outfeed.
INFO:tensorflow:Outfeed finished for iteration (7, 0)
I1120 13:40:34.267921 140681504278272 tpu_estimator.py:279] Outfeed finished for iteration (7, 0)
I1120 13:40:39.989195 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:40:40.056418 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:10.124164 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:10.177670 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:41:40.259634 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:41:40.309398 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:42:10.377460 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
W1120 13:42:10.431982 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health UNKNOWN.
I1120 13:42:40.508342 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:42:40.567739 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:10.638391 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:10.694900 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:43:40.763782 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:43:40.810777 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:10.889873 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:10.942733 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
I1120 13:44:41.011034 140681495885568 transport.py:157] Attempting refresh to obtain initial access_token
WARNING:tensorflow:TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.
W1120 13:44:41.066553 140681495885568 preempted_hook.py:91] TPUPollingThread found TPU tpuv3-5 in state READY, and health HEALTHY.

Note that the reported health was UNKNOWN once, which may or may not be related to this problem.

To resume training, I have to stop the process and run the training command again. It then loads the latest checkpoint and continues training until it eventually stops again.
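
For completeness, this is roughly what resuming looks like; it is just the same command as above, and the gsutil check is only an assumption about one way to confirm which checkpoint will be restored (the Estimator picks up the newest checkpoint listed in the `checkpoint` file in --output_dir):

# Optional: see which checkpoint will be restored next.
gsutil cat "${OUT_DIR}/checkpoint" | head -n 1

# Re-running the exact same command resumes from that checkpoint.
t2t-trainer \
  --model=transformer \
  --hparams_set=transformer_tpu \
  --problem=translate_ende_wmt32k_packed \
  --train_steps=500000 \
  --eval_steps=3000 \
  --data_dir=$DATA_DIR \
  --output_dir=$OUT_DIR \
  --use_tpu=True \
  --cloud_tpu_name=$TPU_NAME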

I am running the training command in a tmux session, so this should not be caused by a connection problem between my machine and Google Cloud. In fact, I can close all windows entirely and connect to the running training session from another PC.
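
Concretely, this is just the standard tmux workflow; the session name and $VM_NAME are placeholders I use here for illustration:

# On the cloud VM: start a named session, launch training inside it,
# then detach with Ctrl-b d.
tmux new -s tpu-train

# Later, from any machine: SSH back into the VM and reattach.
gcloud compute ssh $VM_NAME --zone=us-central1-a
tmux attach -t tpu-train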

I have seen the question TPU training freezes in the middle of training, but I am using a predefined model, and my bucket is defined in the same region (the TPU is in us-central1-a, the storage bucket in us-central1).
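
For what it's worth, the locations can be double-checked directly; $BUCKET_NAME below is a placeholder for my storage bucket:

# Zone of the TPU node
gcloud compute tpus describe $TPU_NAME --zone=us-central1-a

# Location of the bucket ("Location constraint:" in the output)
gsutil ls -L -b gs://$BUCKET_NAME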

Edit: In case this is relevant: I am currently on the free one-month trial that I got by applying to the TensorFlow Research Cloud project. Maybe those cluster nodes are less stable than the paid ones?

Edit 2: Maybe this is related to the GitHub issue TPU dies after 3hrs (e.g. with no 'health' state) (and the follow-up)? Note that the issue has been closed, but the answer given there seems unrelated to the problem. Also, I have checked the file /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tpu/preempted_hook.py in my cloud VM, and both linked changes have already been merged.

I ran into the same problem when training with TFRC's TPUs. As the warning suggests, there seems to be a connection problem between the TPU and Google Cloud, even when we follow the instructions.

I tried several workarounds:

  • Delete the gcloud config folder:

    rm -rf ~/.config/gcloud

  • Update the gcloud SDK:

    gcloud components update

  • Grant the TPU access to the Cloud Bucket via IAM link! (one way to do this is sketched after this list)
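
A minimal sketch of that last point, under the assumption that the TPU uses the project's Cloud TPU service account (of the form service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com) and that object-admin rights on the bucket are enough; $PROJECT_NUMBER and $BUCKET_NAME are placeholders:

# The TPU's service account is listed in the describe output.
gcloud compute tpus describe $TPU_NAME --zone=us-central1-a

# Grant it read/write access to the training bucket.
gsutil iam ch \
  serviceAccount:service-$PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com:objectAdmin \
  gs://$BUCKET_NAME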

The TPU hang still happens, but less frequently. Hopefully this helps in your case, or helps you find a general solution.

This was reported as a bug on GitHub (#1, #2) and has since been fixed. If the error still occurs for you, please reply in the second GitHub issue. Note that you may have to recreate the TPU; merely restarting it may not be enough.
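
If you do need to recreate it, something along these lines should work; the zone, accelerator type, and TensorFlow version below are assumptions for illustration, and depending on your gcloud version additional flags (e.g. a network range) may be required:

gcloud compute tpus delete $TPU_NAME --zone=us-central1-a
gcloud compute tpus create $TPU_NAME \
  --zone=us-central1-a \
  --accelerator-type=v3-8 \
  --version=1.15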