How to resolve "cudaSuccess = err (0 vs. 8)" error on Paddle v0.8.0b?
I have installed paddlepaddle using the .deb
file from https://github.com/baidu/Paddle/releases/download/V0.8.0b1/paddle-gpu-0.8.0b1-Linux.deb
on a machine with 4x GTX 1080, with CUDA 8.0 and cuDNN v5.1 installed but without the NVIDIA Accelerated Graphics Driver:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
I have set the following shell variables:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
All of CUDA works correctly: I have run all of the NVIDIA_CUDA-8.0_Samples
and they "PASSED" every test.
The quick_start demo code in Paddle/demo/quick_start
also runs smoothly and throws no errors.
But when I try to run the image_classification
demo from the Paddle GitHub repository, I get an invalid device function
error. Is there any way to fix this?
hl_gpu_matrix_kernel.cuh:181] Check failed: cudaSuccess == err (0 vs. 8) [hl_gpu_apply_unary_op failed] CUDA error: invalid device function
Full traceback:
~/Paddle/demo/image_classification$ bash train.sh
I1005 14:34:51.929863 10461 Util.cpp:151] commandline: /home/ltan/Paddle/binary/bin/../opt/paddle/bin/paddle_trainer --config=vgg_16_cifar.py --dot_period=10 --log_period=100 --test_all_data_in_one_period=1 --use_gpu=1 --trainer_count=1 --num_passes=200 --save_dir=./cifar_vgg_model
I1005 14:34:56.705898 10461 Util.cpp:126] Calling runInitFunctions
I1005 14:34:56.706171 10461 Util.cpp:139] Call runInitFunctions done.
[INFO 2016-10-05 14:34:56,918 layers.py:1620] channels=3 size=3072
[INFO 2016-10-05 14:34:56,919 layers.py:1620] output size for __conv_0__ is 32
[INFO 2016-10-05 14:34:56,920 layers.py:1620] channels=64 size=65536
[INFO 2016-10-05 14:34:56,920 layers.py:1620] output size for __conv_1__ is 32
[INFO 2016-10-05 14:34:56,922 layers.py:1681] output size for __pool_0__ is 16*16
[INFO 2016-10-05 14:34:56,923 layers.py:1620] channels=64 size=16384
[INFO 2016-10-05 14:34:56,923 layers.py:1620] output size for __conv_2__ is 16
[INFO 2016-10-05 14:34:56,924 layers.py:1620] channels=128 size=32768
[INFO 2016-10-05 14:34:56,925 layers.py:1620] output size for __conv_3__ is 16
[INFO 2016-10-05 14:34:56,926 layers.py:1681] output size for __pool_1__ is 8*8
[INFO 2016-10-05 14:34:56,927 layers.py:1620] channels=128 size=8192
[INFO 2016-10-05 14:34:56,927 layers.py:1620] output size for __conv_4__ is 8
[INFO 2016-10-05 14:34:56,928 layers.py:1620] channels=256 size=16384
[INFO 2016-10-05 14:34:56,929 layers.py:1620] output size for __conv_5__ is 8
[INFO 2016-10-05 14:34:56,930 layers.py:1620] channels=256 size=16384
[INFO 2016-10-05 14:34:56,930 layers.py:1620] output size for __conv_6__ is 8
[INFO 2016-10-05 14:34:56,932 layers.py:1681] output size for __pool_2__ is 4*4
[INFO 2016-10-05 14:34:56,932 layers.py:1620] channels=256 size=4096
[INFO 2016-10-05 14:34:56,933 layers.py:1620] output size for __conv_7__ is 4
[INFO 2016-10-05 14:34:56,934 layers.py:1620] channels=512 size=8192
[INFO 2016-10-05 14:34:56,934 layers.py:1620] output size for __conv_8__ is 4
[INFO 2016-10-05 14:34:56,936 layers.py:1620] channels=512 size=8192
[INFO 2016-10-05 14:34:56,936 layers.py:1620] output size for __conv_9__ is 4
[INFO 2016-10-05 14:34:56,938 layers.py:1681] output size for __pool_3__ is 2*2
[INFO 2016-10-05 14:34:56,938 layers.py:1681] output size for __pool_4__ is 1*1
[INFO 2016-10-05 14:34:56,941 networks.py:1125] The input order is [image, label]
[INFO 2016-10-05 14:34:56,941 networks.py:1132] The output order is [__cost_0__]
I1005 14:34:56.948256 10461 Trainer.cpp:170] trainer mode: Normal
F1005 14:34:56.949136 10461 hl_gpu_matrix_kernel.cuh:181] Check failed: cudaSuccess == err (0 vs. 8) [hl_gpu_apply_unary_op failed] CUDA error: invalid device function
*** Check failure stack trace: ***
@ 0x7fa557316daa (unknown)
@ 0x7fa557316ce4 (unknown)
@ 0x7fa5573166e6 (unknown)
@ 0x7fa557319687 (unknown)
@ 0x78a939 hl_gpu_apply_unary_op<>()
@ 0x7536bf paddle::BaseMatrixT<>::applyUnary<>()
@ 0x7532a9 paddle::BaseMatrixT<>::applyUnary<>()
@ 0x73d82f paddle::BaseMatrixT<>::zero()
@ 0x66d2ae paddle::Parameter::enableType()
@ 0x669acc paddle::parameterInitNN()
@ 0x66bd13 paddle::NeuralNetwork::init()
@ 0x679ed3 paddle::GradientMachine::create()
@ 0x6a6355 paddle::TrainerInternal::init()
@ 0x6a2697 paddle::Trainer::init()
@ 0x53a1f5 main
@ 0x7fa556522f45 (unknown)
@ 0x545ae5 (unknown)
@ (nil) (unknown)
/home/xxx/Paddle/binary/bin/paddle: line 81: 10461 Aborted (core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}
No data to plot. Exiting!
According to issue #158 in the git repo, this should have been resolved by #170, which added support for the GTX 1080 and CUDA 8.0, yet the error is still thrown when the GPU is used. (Sorry, I can't add more than 2 links with low reputation.)
Does anyone know how to fix this and install Paddle so that image_classification
can run?
I also tried compiling and installing from source; it threw the same error, while the quick_start
demo ran smoothly.
I know nothing about Paddle. However, the CUDA error is almost certainly caused by the binary you installed not containing code for your (rather new) GTX 1080. Either find a version built with Pascal GPU support, or build your own from source.
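One way to confirm this diagnosis (assuming the CUDA toolkit's cuobjdump tool is on your PATH; the trainer path below is taken from the log above) is to list the GPU code embedded in the binary:

```shell
# List the cubin (ELF) and PTX images embedded in the installed trainer binary.
# A GTX 1080 is compute capability 6.1: it needs an sm_60/sm_61 cubin, or PTX
# for a compatible compute_XX virtual architecture that the driver can JIT.
cuobjdump --list-elf /home/ltan/Paddle/binary/bin/../opt/paddle/bin/paddle_trainer
cuobjdump --list-ptx /home/ltan/Paddle/binary/bin/../opt/paddle/bin/paddle_trainer
```

If neither listing mentions sm_60 or a compute_60-or-later PTX image, the binary cannot produce valid device functions for your card, which matches the error you are seeing.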
The problem is caused by the architecture flags set for CUDA 8.0 in Paddle/cmake/flags.cmake.
It was fixed in https://github.com/baidu/Paddle/pull/165/files by adding compute_52,
sm_52, compute_60, and sm_60.
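As a rough sketch (the file name below is hypothetical, and the exact variable names in flags.cmake differ between versions), the fix amounts to passing extra -gencode pairs to nvcc so that both Maxwell and Pascal targets are covered:

```shell
# Hypothetical nvcc invocation showing the added architecture targets:
# sm_52 and sm_60 cubins, plus compute_60 PTX as a forward-compatible
# fallback that newer Pascal parts such as the GTX 1080 (6.1) can JIT.
nvcc -c example_kernel.cu -o example_kernel.o \
     -gencode arch=compute_52,code=sm_52 \
     -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_60,code=compute_60
```

After changing the flags you must rebuild Paddle from source; the prebuilt .deb was compiled before this change and will keep failing regardless of your local CUDA setup.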