How to get NVIDIA driver using a deeplearning-platform-release VM image?
I'm running into an issue where I need the NVIDIA driver installed.
I originally created a Compute Engine VM based on this:
export IMAGE_FAMILY="pytorch-latest-cu100"
export ZONE="us-west1-b"
export INSTANCE_NAME="my-instance"
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-v100,count=1" \
--metadata="install-nvidia-driver=True"
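Before containerizing, it can help to confirm the driver actually landed on the VM (the `install-nvidia-driver=True` metadata triggers an install on first boot, which can take a few minutes). A minimal check, run on the VM itself; the helper name is mine, not part of the image:

```python
import shutil
import subprocess

def driver_installed():
    """Return True if the NVIDIA driver is visible, i.e. nvidia-smi exists and exits cleanly."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        subprocess.run(["nvidia-smi"], check=True, capture_output=True)
        return True
    except subprocess.CalledProcessError:
        return False

print(driver_installed())
```

On the VM this should print `True`; inside a plain (non-GPU) container it prints `False`, which matches the traceback below.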
The code I deployed on this VM runs fine. Now I need to put a REST API layer on top of it, so according to this, I need to containerize the application with Docker.
I tried to build a Docker image from:
gcr.io/deeplearning-platform-release/pytorch-latest-cu100
(the image family from the command above), but that image doesn't seem to exist.
Then I tried building another image from:
gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
But now when I run my code, I get the following error:
Traceback (most recent call last):
  File "model.py", line 297, in run
    data = main(filepath)
  File "model.py", line 52, in main
    model = model.cuda()
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 260, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
    module._apply(fn)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
    param.data = fn(param.data)
  File "/root/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 260, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/root/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
    _check_driver()
  File "/root/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 82, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
My Dockerfile:
FROM gcr.io/deeplearning-platform-release/pytorch-gpu.1-1
WORKDIR /app
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8080
COPY . /app/
CMD ["python", "main.py"]
My main.py:
from flask import Flask, request

import model

app = Flask(__name__)

@app.route('/getduration', methods=['POST'])
def get_duration():
    try:
        data = request.args.get('param')
    except:
        data = None
    try:
        duration = model.run(data)
        return duration, 200
    except Exception as e:
        error = f"There was an error: {e}"
        return error, 500

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080, debug=True)
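Independent of the container setup, model code can fail soft when no driver is present instead of crashing with the assertion above. A hedged sketch (the function name is illustrative, not from model.py; `torch.cuda.is_available()` is the canonical check, but this version avoids importing torch so it runs anywhere):

```python
import shutil

def pick_device():
    # Inside a container started with plain `docker run`, the driver is not
    # mounted and nvidia-smi is absent from PATH, so we fall back to CPU.
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

device = pick_device()
# then: model = model.to(device)  instead of the unconditional model.cuda()
```

This keeps the API responding (slowly, on CPU) even when the GPU is not exposed to the container.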
How do I update my Dockerfile so that I can use the NVIDIA driver?
Are you using NVIDIA Docker? If not, that is probably your problem. Use nvidia-docker exactly the way you would use docker; it makes the NVIDIA driver available inside your container.
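For example, with the Dockerfile above (the image tag is arbitrary, chosen here for illustration), either the legacy nvidia-docker wrapper or the `--gpus` flag introduced in Docker 19.03 with the NVIDIA Container Toolkit will expose the host driver to the container:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-model-api .

# Docker 19.03+ with the NVIDIA Container Toolkit installed:
docker run --gpus all -p 8080:8080 my-model-api

# Older setups using the nvidia-docker wrapper:
nvidia-docker run -p 8080:8080 my-model-api
```

No change to the Dockerfile itself is required; the driver is mounted into the container at run time, which is why baking it into the image is neither needed nor recommended.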