How to deploy a PaddlePaddle Docker container with GPU support?

There are subtle differences between the documentation for Docker deployment of PaddlePaddle [1] and the documentation for manually installing PaddlePaddle from source [2].

The Docker deployment documentation says to pull the container from Docker Hub:

docker pull paddledev/paddle
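(A side note: docker pull without a tag fetches only the :latest tag, while the run command below refers to the gpu-latest tag, so the GPU image may need to be pulled explicitly. This is my reading, not something the docs spell out:)

docker pull paddledev/paddle:gpu-latest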

Environment variables should then be set and passed to docker run, i.e.:

export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
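Before running this, it seemed worth confirming on the host that the paths those flags refer to actually exist (a minimal sanity check; the driver library directory varies by distro and driver version):

# device nodes referenced by the --device flags
ls -l /dev/nvidia*
# driver libraries referenced by the -v mounts (location varies)
ls /usr/lib64/libcuda* /usr/lib64/libnvidia* 2>/dev/null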

The export commands look for libcuda* and libnvidia* in /usr/lib64/, but in the build-from-source documentation the location of lib64/ is /usr/local/cuda/lib64.

In any case, the location of lib64/ can be found with:

cat /etc/ld.so.conf.d/cuda.conf
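Since libcuda* and libnvidia* are normally installed by the NVIDIA driver rather than by the CUDA toolkit, they may live somewhere else entirely (e.g. /usr/lib64 or /usr/lib/x86_64-linux-gnu). One way to see where the dynamic linker actually finds them (a diagnostic sketch, not from the PaddlePaddle docs):

# list every libcuda/libnvidia entry in the linker cache and print its path
ldconfig -p | grep -E 'libcuda|libnvidia' | awk '{print $NF}'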

Also, the export command looks for libnvidia*, which does not seem to exist under /usr/local/cuda/ apart from libnvidia-ml.so:

/usr/local/cuda$ find . -name 'libnvidia*'
./lib64/stubs/libnvidia-ml.so

I suppose the correct files that CUDA_SO should be pointing at are the libcudart* files.

But is that right? What should the CUDA_SO environment variable be to deploy PaddlePaddle with GPU support?

Even with the variable set to the libcudart* files, the Docker container still cannot seem to find the GPU driver, i.e.:

user0@server1:~/dockdock$ echo CUDA_SO="$(\ls $CUDA_CONFILE/libcuda* | xargs -I{} echo '-v {}:{}')"
CUDA_SO=-v /usr/local/cuda/lib64/libcudadevrt.a:/usr/local/cuda/lib64/libcudadevrt.a
-v /usr/local/cuda/lib64/libcudart.so:/usr/local/cuda/lib64/libcudart.so
-v /usr/local/cuda/lib64/libcudart.so.8.0:/usr/local/cuda/lib64/libcudart.so.8.0
-v /usr/local/cuda/lib64/libcudart.so.8.0.44:/usr/local/cuda/lib64/libcudart.so.8.0.44
-v /usr/local/cuda/lib64/libcudart_static.a:/usr/local/cuda/lib64/libcudart_static.a

user0@server1:~/dockdock$ export CUDA_SO="$(\ls $CUDA_CONFILE/libcuda* | xargs -I{} echo '-v {}:{}')"

user0@server1:~/dockdock$ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')

user0@server1:~/dockdock$ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest

root@bd25dfd4f824:/# git clone https://github.com/baidu/Paddle paddle
Cloning into 'paddle'...
remote: Counting objects: 26626, done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 26626 (delta 3), reused 0 (delta 0), pack-reused 26603
Receiving objects: 100% (26626/26626), 25.41 MiB | 4.02 MiB/s, done.
Resolving deltas: 100% (18786/18786), done.
Checking connectivity... done.

root@bd25dfd4f824:/# cd paddle/demo/quick_start/

root@bd25dfd4f824:/paddle/demo/quick_start# sed -i 's|--use_gpu=false|--use_gpu=true|g' train.sh 

root@bd25dfd4f824:/paddle/demo/quick_start# bash train.sh 
I0410 09:25:37.300365    48 Util.cpp:155] commandline: /usr/local/bin/../opt/paddle/bin/paddle_trainer --config=trainer_config.lr.py --save_dir=./output --trainer_count=4 --log_period=100 --num_passes=15 --use_gpu=true --show_parameter_stats_period=100 --test_all_data_in_one_period=1 
F0410 09:25:37.300940    48 hl_cuda_device.cc:526] Check failed: cudaSuccess == cudaStat (0 vs. 35) Cuda Error: CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
    @     0x7efc20557daa  (unknown)
    @     0x7efc20557ce4  (unknown)
    @     0x7efc205576e6  (unknown)
    @     0x7efc2055a687  (unknown)
    @           0x895560  hl_specify_devices_start()
    @           0x89576d  hl_start()
    @           0x80f402  paddle::initMain()
    @           0x52ac5b  main
    @     0x7efc1f763f45  (unknown)
    @           0x540c05  (unknown)
    @              (nil)  (unknown)
/usr/local/bin/paddle: line 109:    48 Aborted                 (core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}
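The error message says the installed driver is older than the CUDA runtime inside the image, so comparing the host driver version with the container's CUDA runtime version seemed like the next step (a diagnostic sketch; it assumes the driver is loaded on the host and that the image ships nvcc):

# on the host: report the loaded kernel driver version
cat /proc/driver/nvidia/version
nvidia-smi
# inside the container: report the CUDA toolkit/runtime version (if nvcc is present)
nvcc --version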

  [1]: http://www.paddlepaddle.org/doc/build/docker_install.html
  [2]: http://paddlepaddle.org/doc/build/build_from_source.html

How do I deploy a PaddlePaddle Docker container with GPU support?


Also, the same question in Chinese: https://github.com/PaddlePaddle/Paddle/issues/1764

Please refer to http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/docker_install_en.html

The recommended way is to use nvidia-docker.

Please install nvidia-docker first, following its tutorial.
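For a Debian/Ubuntu host, the nvidia-docker 1.x install boils down to downloading and installing a single .deb (a sketch based on the project's release instructions at the time; the exact version and asset name should be checked on the releases page):

# download and install the nvidia-docker 1.0.1 package (asset name assumed; verify on the releases page)
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker_1.0.1-1_amd64.deb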

Now you can run the GPU image:

docker pull paddlepaddle/paddle
nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
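To confirm that nvidia-docker can see the GPU at all, the stock CUDA image makes a quick smoke test (the standard nvidia-docker check; it assumes the nvidia/cuda image can be pulled):

# should print the same GPU table that nvidia-smi prints on the host
nvidia-docker run --rm nvidia/cuda nvidia-smi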