Docker FastAPI load balancing with NGINX
I'm looking for some direction/critique on a load-balancing setup for a web API I'm working on.
Here is what I'm doing so far, but I have doubts:
- I build the first image (the application) in mainApp with "docker build -t app ."
- I build what is supposed to be the load balancer in the nginx folder with "docker build -t nginx ."
- I run the images in separate containers in Docker Desktop on Windows: the app on port 8080 and nginx on port 8090 (roughly the commands sketched after this list).
- When I load localhost:8090 in a web browser, it does seem to switch between different process IDs, but it usually rotates between 3 of them rather than just the 2 I'm trying to declare in the nginx.conf file. That makes me believe it isn't really set up correctly and that the returned process IDs are misleading. Is there a better way to test this?
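A minimal sketch of the run commands implied by these steps, assuming both containers serve on port 80 internally (the default for both base images) and that the container names are illustrative:
docker run -d --name app1 -p 8080:80 app
docker run -d --name lb -p 8090:80 nginx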
My file structure is as follows:
mainApp
    app
        main.py
    Dockerfile
    requirements.txt
nginx
    Dockerfile
    nginx.conf
Code:
main.py
import os
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Served From": str(os.getpid())}
Dockerfile (inside mainApp)
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
RUN pip install --upgrade pip
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . /app
Dockerfile (inside nginx)
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/
nginx.conf
upstream loadbalancer {
    server 192.168.80.12:8080;
    server 192.168.80.12:8081;
}
server {
    listen 80;
    location / {
        proxy_pass http://loadbalancer;
    }
}
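For the two upstream entries to actually balance across two backends, a second copy of the app would also need to be running and published on port 8081; a sketch, assuming 192.168.80.12 is the address at which the published ports are reachable from the nginx container and that the container name is illustrative:
docker run -d --name app2 -p 8081:80 app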
uvicorn in the docker image will by default use the same number of workers as there are CPUs available on the server. The minimum number of workers defaults to 2 (so even if you only have a single core, two workers will still be started to handle requests).
The worker pid will then change depending on which uvicorn worker handles your connection.
WORKERS_PER_CORE
This image will check how many CPU cores are available in the current server running your container. It will set the number of workers to the number of CPU cores multiplied by this value.
By default: 1
You can set it like:
docker run -d -p 80:80 -e WORKERS_PER_CORE="3" myimage
If you used the value 3 in a server with 2 CPU cores, it would run 6 worker processes.
You can use floating point values too.
So, for example, if you have a big server (let's say, with 8 CPU cores) running several applications, and you have a FastAPI application that you know won't need high performance. And you don't want to waste server resources. You could make it use 0.5 workers per CPU core. For example:
docker run -d -p 80:80 -e WORKERS_PER_CORE="0.5" myimage
In a server with 8 CPU cores, this would make it start only 4 worker processes.
Note: By default, if WORKERS_PER_CORE is 1 and the server has only 1 CPU core, instead of starting 1 single worker, it will start 2. This is to avoid bad performance and blocking applications (server application) on small machines (server machine/cloud/etc). This can be overridden using WEB_CONCURRENCY.
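If you want the pid test to line up with the two upstream entries, you could pin each app container to a single worker; a sketch using the WEB_CONCURRENCY override mentioned above (container name illustrative):
docker run -d --name app1 -p 8080:80 -e WEB_CONCURRENCY="1" app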
Instead, you could use socket.gethostname() to get the hostname of the docker container serving the request and check whether it differs. Another option is to look at the logs of the containers themselves - the image has access logging enabled by default (or print something to stdout yourself) - and verify that both containers receive requests. You can use docker logs to see the log of a container.
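A minimal sketch of the hostname approach, keeping the rest of main.py as in the question:
import socket
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # the hostname defaults to the container ID, so this identifies the
    # container (rather than the worker process) that served the request
    return {"Served From": socket.gethostname()}
And to follow a container's access log while you send a few requests:
docker logs -f <container name>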