How do I prevent swarm containers from becoming orphans upon removing the stack?
I run a Docker Swarm instance with the following restart script:
#!/usr/bin/env sh
docker stack rm owlab
sleep 10
docker stack deploy --compose-file ./docker-compose.yml owlab
docker-compose.yml:
version: "3"
services:
webapp-front:
image: "preprod.thatsowl.com:4200/webapp-front-dev"
ports:
- "80:80"
volumes:
- "../../webapp/frontend:/usr/src/app/"
webapp-back:
image: "webapp-back-dev"
ports:
- "4000:4000"
volumes:
- ../../webapp/backend/src:/usr/src/app/src
- ../../webapp/backend/uploads:/usr/src/app/uploads
- ../../webapp/shared:/usr/src/app/shared
- ../../webapp/backend/html-minifier.conf:/usr/src/app/html-minifier.conf
environment:
- HOST_TO=http://localhost
- DB_TO=local
depends_on:
- mongo
mongo:
image: mongo:4.2.8
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- ~/owlab_volumes/mongo:/data/db
mongo-express:
image: mongo-express
restart: always
ports:
- 7081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
Sometimes when I run my restart script, some containers survive. I expect removing the stack to remove all of its containers as usual, but occasionally one or more containers stay up, as if they had declared independence. How often this happens, and how many containers are involved when it does, appears to be random.
➜ local git:(master) ✗ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
➜ local git:(master) ✗ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a75b1930778 mongo:4.2.8 "docker-entrypoint.s…" 2 hours ago Up 2 hours 27017/tcp owlab_mongo.1.q7np7im4sbpbtdxe0l1q989dk
d086085894d4 3e0babb28f48 "npm run dev" 2 hours ago Up 2 hours 4000/tcp owlab_webapp-back.1.ykpkjb79tjq21dbr3fmhjoa21
58d178bba35f preprod.thatsowl.com:4200/webapp-front-dev:latest "npm start" 2 hours ago Up 2 hours 80/tcp owlab_webapp-front.1.jam4w1z3py8m52msrgx8k23hc
I tried stopping a container manually, but the command hangs and the container stays up forever:
➜ local git:(master) ✗ docker container stop 3a75b1930778
I also tried using the dperny/tasknuke image, but it hung as well:
➜ local git:(master) ✗ docker run --rm -v /var/run/docker/swarm/control.sock:/var/run/swarmd.sock dperny/tasknuke 3a75b1930778
Unable to find image 'dperny/tasknuke:latest' locally
latest: Pulling from dperny/tasknuke
88286f41530e: Pull complete
0e61a138cf9f: Pull complete
Digest: sha256:9e2e81971d201cee98f595f4516793333a4eb21bb9d7f7ca858ad2edb50353ad
Status: Downloaded newer image for dperny/tasknuke:latest
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
Note that this problem then prevents me from creating my stack again:
➜ local git:(master) ✗ ./restart.sh
Removing network owlab_default
Failed to remove network iykl6khzqe646fra5xvddf9ow: Error response from daemon: network iykl6khzqe646fra5xvddf9ow not foundFailed to remove some resources from stack: owlab
Creating service owlab_webapp-front
failed to create service owlab_webapp-front: Error response from daemon: network owlab_default not found
Why do some containers (seemingly at random) survive a stack removal, and how can I prevent that?
Stack deployment and removal happen asynchronously: `docker stack rm` returns before all of the stack's objects are actually gone, so a fixed `sleep 10` is not always long enough. What you are running into is most likely a race condition.

Make sure you wait until every object of the removed stack has disappeared before redeploying. I have hit this race condition myself in the past; the following approach works for me:
stack=owlab

docker stack rm ${stack}

# Every object created by the stack carries the com.docker.stack.namespace
# label, so poll each object type until nothing with that label is left.
types="service network config secret"
for type in $types; do
  until [ -z "$(docker $type ls --filter "label=com.docker.stack.namespace=${stack}" -q)" ]; do
    sleep 1
  done
done

docker stack deploy --compose-file ./docker-compose.yml ${stack}
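The polling idea above can also be factored into a small reusable helper. This is just a sketch; `wait_until_empty` is a hypothetical name (not part of Docker), and it only demonstrates the same until/sleep pattern, parameterized over the listing command:

```shell
#!/usr/bin/env sh
# wait_until_empty: poll the given command until its output is empty.
# Hypothetical helper illustrating the polling pattern used above.
wait_until_empty() {
  # "$@" is the command whose empty output means "the resources are gone"
  until [ -z "$("$@")" ]; do
    sleep 1
  done
}

# With Docker, you could also wait for the stack's leftover task
# containers themselves, since they carry the same namespace label:
#   wait_until_empty docker container ls -a \
#       --filter "label=com.docker.stack.namespace=owlab" -q
```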