Not able to run linux command in background from dockerfile?

Here is my Dockerfile:

FROM ubuntu:20.04

ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt upgrade -y
RUN apt install -y -q software-properties-common
RUN apt install -y -q build-essential python3-pip python3-dev
RUN apt-get install -y gcc make apt-transport-https ca-certificates build-essential
RUN apt-get install -y curl autoconf automake libtool pkg-config git libreoffice wget
RUN apt-get install -y g++
RUN apt-get install -y autoconf automake libtool
RUN apt-get install -y pkg-config
RUN apt-get install -y libpng-dev
RUN apt-get install -y libjpeg8-dev
RUN apt-get install -y libtiff5-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libleptonica-dev
RUN apt-get install -y libicu-dev libpango1.0-dev libcairo2-dev

# python dependencies
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn uvloop httptools dvc[s3]
RUN pip3 install nltk
RUN python3 -c "import nltk;nltk.download('stopwords')" 

# copy required files
RUN bash -c 'mkdir -p /app/{app,models,requirements}'
COPY ./config.yaml /app
COPY ./models /app/models
COPY ./requirements /app/requirements
COPY ./app /app/app


# tensorflow serving for models
RUN echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
    curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
RUN apt-get update && apt-get install tensorflow-model-server
RUN tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

ENTRYPOINT /usr/local/bin/gunicorn \
    -b 0.0.0.0:80 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --timeout 120 \
    --chdir /app \
    --log-level 'info' \
    --error-logfile '-'\
    --access-logfile '-'

No matter what I do, the line below is never executed while running the docker image:

RUN tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

Why is that? How can I run the above command in the background and still go on to the entrypoint in the Dockerfile? Any help is appreciated.

Why is that?

Because your docker container is configured to run /usr/local/bin/gunicorn, as defined by the ENTRYPOINT instruction.

how can I run that above command in background and go to entrypoint in docker file.

The standard way to do this is to write a wrapper script that runs all of the programs you need. So for this example, something like run.sh:

#!/bin/bash

# Start tensorflow server
tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/app/models/model.conf --model_base_path=/app/models &

# Start gunicorn
/usr/local/bin/gunicorn \
    -b 0.0.0.0:80 \
    -w 1 \
    -k uvicorn.workers.UvicornWorker app.main:app \
    --timeout 120 \
    --chdir /app \
    --log-level 'info' \
    --error-logfile '-'\
    --access-logfile '-'

Then in the Dockerfile:

ADD run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
ENTRYPOINT /usr/local/bin/run.sh
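
With that in place, the single ENTRYPOINT starts both processes. A quick sketch of building and running it (the image tag myapp is just a placeholder, not something defined in this question):

# Build the image and run it, publishing gunicorn's port 80 on the host
docker build -t myapp .
docker run --rm -p 8000:80 myapp

One common refinement is to start gunicorn with exec (exec /usr/local/bin/gunicorn ...) in run.sh, so it replaces the shell as PID 1 and receives the SIGTERM from docker stop directly.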

You should be able to create a separate Dockerfile that runs the TensorFlow server:

FROM ubuntu:20.04

# Install the server
RUN echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list \
 && curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add - \
 && apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      tensorflow-model-server

# Copy our local models into the image
COPY ./models /models

# Make the server be the main container command
CMD tensorflow_model_server --port=8500 --rest_api_port=8501 --model_config_file=/models/model.conf --model_base_path=/models

Then you can remove the matching lines from your main application's Dockerfile.
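
If you want to sanity-check that image on its own first, a sketch (the tag tf-serving and the file name Dockerfile.tensorflow are assumptions, matching the Compose file below):

# Build the standalone serving image from its own Dockerfile
docker build -f Dockerfile.tensorflow -t tf-serving .
# Run it with the REST API port published, then query model status from
# another shell, e.g. curl http://localhost:8501/v1/models/your_model
docker run --rm -p 8501:8501 tf-serving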

Once you've done that, you can create a Docker Compose setup that launches the two containers:

version: '3.8'
services:
  application:
    build: .
    ports: ['8000:80']
    environment:
      - TENSORFLOW_URL=http://tf:8500
  tf:
    build:
      context: .
      dockerfile: Dockerfile.tensorflow
    # ports: ['8500:8500', '8501:8501']

Your application needs to know to look up os.environ['TENSORFLOW_URL'] to find the server. Now you have two containers, each of which runs a single foreground process as its CMD.
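
With both services defined, docker-compose up --build builds and starts the pair together. As a minimal sketch of the application side, assuming you point TENSORFLOW_URL at the REST port (8501) rather than the gRPC port (8500) used above, and that the requests package is installed:

import os
import requests

# URL injected by Compose; fall back to a local default for development
TENSORFLOW_URL = os.environ.get("TENSORFLOW_URL", "http://localhost:8501")

def predict(model_name, instances):
    # TensorFlow Serving's REST predict endpoint: /v1/models/<name>:predict
    resp = requests.post(
        f"{TENSORFLOW_URL}/v1/models/{model_name}:predict",
        json={"instances": instances},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["predictions"]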

At a lower level, a Docker image doesn't contain any running processes; think of it as a tar file plus the command line to run. Anything you launch in the background in a RUN command is killed as soon as that RUN command completes.
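
As a minimal illustration of that last point (a hypothetical fragment, not something to add to your Dockerfile):

# The sleep below lives only inside the temporary container for this single
# build step; it is gone as soon as the step finishes and is not part of the image
RUN sleep 300 &

# Only the process named here is started at docker run time, so the sleep
# above will never be running in the final container
CMD ["echo", "container started"]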