Using Pycuda with PySpark - nvcc not found
My environment:
I am using Hortonworks HDP 2.4 with Spark 1.6.1 on a small AWS EC2 cluster of four g2.2xlarge instances running Ubuntu 14.04. Each instance has CUDA 7.5, Anaconda Python 3.5, and PyCUDA 2016.1.1.
In /etc/bash.bashrc I have set:
CUDA_HOME=/usr/local/cuda
CUDA_ROOT=/usr/local/cuda
PATH=$PATH:/usr/local/cuda/bin
On all four machines I can reach nvcc from the command line as the ubuntu user, the root user, and the yarn user.
My problem:
I have a Python/PyCUDA project that I have adapted to run on Spark. It runs great on the local Spark installation on my Mac, but when I run it on AWS I get:
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'
Since it runs fine in local mode on my Mac, my guess is that this is a CUDA/PyCUDA configuration problem in the worker processes, but I'm really stumped as to what it could be.
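One quick way to test that theory is to ask the executors what their Python workers actually see. A minimal sketch (data stands in for the RDD the job runs over):

import os

# Diagnostic sketch: report the PATH visible inside each executor's Python
# worker, one entry per partition. 'data' is the same RDD the job uses.
worker_paths = data.mapPartitions(lambda _: [os.environ.get('PATH', '')]).collect()
for p in sorted(set(worker_paths)):
    print(p)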
Any ideas?
Edit: below is the stack trace from one of the failing jobs:
16/11/10 22:34:54 INFO ExecutorAllocationManager: Requesting 13 new executors because tasks are backlogged (new desired total will be 17)
16/11/10 22:34:57 INFO TaskSetManager: Starting task 16.0 in stage 2.0 (TID 34, ip-172-31-26-35.ec2.internal, partition 16,RACK_LOCAL, 2148 bytes)
16/11/10 22:34:57 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on ip-172-31-26-35.ec2.internal:54657 (size: 32.2 KB, free: 511.1 MB)
16/11/10 22:35:03 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 18, ip-172-31-26-35.ec2.internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 46, in call_capture_output
popen = Popen(cmdline, cwd=cwd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "/home/ubuntu/anaconda3/lib/python3.5/subprocess.py", line 1551, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/hadoop/yarn/local/usercache/ubuntu/appcache/application_1478814770538_0004/container_e40_1478814770538_0004_01_000009/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
File "/home/ubuntu/pycuda-euler/src/cli_spark_gpu.py", line 36, in <lambda>
hail_mary = data.mapPartitions(lambda x: ec.assemble2(k, buffer=x, readLength = dataLength,readCount=dataCount)).saveAsTextFile('hdfs://172.31.26.32/genome/sra_output')
File "./eulercuda.zip/eulercuda/eulercuda.py", line 499, in assemble2
lmerLength, evList, eeList, levEdgeList, entEdgeList, readCount)
File "./eulercuda.zip/eulercuda/eulercuda.py", line 238, in constructDebruijnGraph
lmerCount, h_kmerKeys, h_kmerValues, kmerCount, numReads)
File "./eulercuda.zip/eulercuda/eulercuda.py", line 121, in readLmersKmersCuda
d_lmers = enc.encode_lmer_device(buffer, partitionReadCount, d_lmers, readLength, lmerLength)
File "./eulercuda.zip/eulercuda/pyencode.py", line 78, in encode_lmer_device
""")
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 265, in __init__
arch, code, cache_dir, include_dirs)
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 255, in compile
return compile_plain(source, options, keep, nvcc, cache_dir, target)
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 78, in compile_plain
checksum.update(preprocess_source(source, options, nvcc).encode("utf-8"))
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/compiler.py", line 50, in preprocess_source
result, stdout, stderr = call_capture_output(cmdline, error_on_nonzero=False)
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 197, in call_capture_output
return forker[0].call_capture_output(cmdline, cwd, error_on_nonzero)
File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/pytools/prefork.py", line 54, in call_capture_output
% ( " ".join(cmdline), e))
pytools.prefork.ExecError: error invoking 'nvcc --preprocess -arch sm_30 -I/home/ubuntu/anaconda3/lib/python3.5/site-packages/pycuda/cuda /tmp/tmpkpqwoaxf.cu --compiler-options -P': [Errno 2] No such file or directory: 'nvcc'
at org.apache.spark.api.python.PythonRunner$$anon.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
To close the loop on this: I finally worked around the problem.
Note: I know this is neither a good nor a permanent answer for most people, but in my case I am running POC code for my dissertation, and as soon as I get some final results I am decommissioning the servers. I doubt this answer is suitable or appropriate for most users.
I ended up hard-coding the full path to nvcc into compile_plain() in PyCUDA's compiler.py file.
Partial listing:
def compile_plain(source, options, keep, nvcc, cache_dir, target="cubin"):
    from os.path import join

    assert target in ["cubin", "ptx", "fatbin"]
    nvcc = '/usr/local/cuda/bin/' + nvcc
    if cache_dir:
        checksum = _new_md5()
Hopefully this points someone else in the right direction.
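A less invasive variant of the same idea (just a sketch of what I would try, not what I actually ran) is to prepend the CUDA bin directory to PATH inside the partition function itself, on the worker, before PyCUDA is asked to compile anything. The function name gpu_partition is made up; k, dataLength, dataCount and ec.assemble2 are the same names that appear in the stack trace above:

def gpu_partition(iterator):
    import os
    # Make sure the executor's Python worker can find nvcc before PyCUDA
    # compiles its kernels; /usr/local/cuda/bin matches the setup above.
    os.environ['PATH'] = os.environ.get('PATH', '') + os.pathsep + '/usr/local/cuda/bin'
    return ec.assemble2(k, buffer=iterator, readLength=dataLength, readCount=dataCount)

hail_mary = data.mapPartitions(gpu_partition).saveAsTextFile('hdfs://172.31.26.32/genome/sra_output')

This relies on PyCUDA only shelling out to nvcc when the kernels are actually compiled inside assemble2, which happens after the PATH has been patched.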
The error means that nvcc is not on the PATH of the process running the code.
Amazon ECS Container Agent Configuration - Amazon EC2 Container Service has instructions on how to set environment variables for a cluster.
For the same thing in Hadoop, there is Configuring Environment of Hadoop Daemons – Hadoop Cluster Setup.
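For a PySpark job on YARN specifically, the same environment can also be handed over through Spark's own configuration instead of the Hadoop daemon settings. A minimal sketch: spark.executorEnv.* and spark.yarn.appMasterEnv.* are standard Spark 1.x properties, while the application name and the exact PATH value are assumptions based on the setup in the question:

from pyspark import SparkConf, SparkContext

# Sketch: pass a CUDA-aware environment to the YARN application master and to
# every executor, so the Python workers they spawn inherit a PATH with nvcc.
cuda_path = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/cuda/bin"
conf = (SparkConf()
        .setAppName("pycuda-euler")
        .set("spark.executorEnv.PATH", cuda_path)
        .set("spark.executorEnv.CUDA_HOME", "/usr/local/cuda")
        .set("spark.yarn.appMasterEnv.PATH", cuda_path))
sc = SparkContext(conf=conf)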