Kubernetes pod error: creating multiple services

I am new to Kubernetes, so I apologize if my question seems vague; I will try to be as detailed as possible. I have a pod on Google Cloud through Kubernetes, and it has a GPU in it. This GPU handles one set of tasks, let's say image classification. For that I created a service with Kubernetes; the Deployment and Service sections of my yaml file are shown below. The URL of this service will be http://model-server-service.default.svc.cluster.local, because the name of the service is model-server-service.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: model-server
  name: model-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: model-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
      - args:
        - -t
        - "120"
        - -b
        - "0.0.0.0"
        - app:flask_app
        command:
        - gunicorn
        env:
        - name: ENV
          value: staging
        - name: GCP
          value: "2"
        image: gcr.io/my-production/my-model-server:myGitHash
        imagePullPolicy: Always
        name: model-server
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        resources:
          limits:
            nvidia.com/gpu: 1
        ports:
          - containerPort: 8000
            protocol: TCP
        volumeMounts:
        - name: model-files
          mountPath: /model-server/models
      # These containers are run during pod initialization
      initContainers:
      - name: model-download
        image: gcr.io/my-production/my-model-server:myGitHash
        command:
        - gsutil
        - cp
        - -r
        - gs://my-staging-models/*
        - /model-files/
        volumeMounts:
        - name: model-files
          mountPath: "/model-files"
      volumes:
      - name: model-files
        emptyDir: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: model-server
  name: model-server-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app: model-server
  sessionAffinity: None
  type: ClusterIP
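
For what it is worth, here is a minimal sketch of how this Service and its cluster DNS name can be checked from inside the cluster; the busybox image is just a throwaway debug image and not part of my setup:

# Confirm the Service has picked up the pod as an endpoint (port 80 -> targetPort 8000)
kubectl get endpoints model-server-service -n default

# Resolve the cluster DNS name from a temporary pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup model-server-service.default.svc.cluster.local

# Hit the service on port 80, which forwards to gunicorn on containerPort 8000
kubectl run curl-test --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://model-server-service.default.svc.cluster.local/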

My problem starts here. I am creating a new set of tasks. For this new set of tasks I will need a lot of memory, so I do not want to run them through the previous service; I want to handle them as part of a separate, new service with the URL http://model-server-heavy-service.default.svc.cluster.local. I tried to create a new yaml file, model-server-heavy.yaml. In this new yaml file I changed the service name from model-server-service to model-server-heavy-service, and I changed the app label and the names from model-server to model-server-heavy. The final yaml file looks like what I have put at the end of this post. Unfortunately, the new model server does not work, and I get the following message from Kubernetes about it:

model-server-asdhjs-asd    1/1     Running            0          21m
model-server-heavy-xnshk   0/1     CrashLoopBackOff   8          21m
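
Before changing anything in the manifest, the actual crash reason is usually visible in the pod events and in the logs of the previous (crashed) container instance, for example:

# Events show restart reasons such as a failed volume mount, an OOM kill, or a non-zero exit from gunicorn
kubectl describe pod model-server-heavy-xnshk -n default

# Logs of the last crashed instance of the main container
kubectl logs model-server-heavy-xnshk -n default --previous

# Logs of the init container, in case the gsutil download itself failed
kubectl logs model-server-heavy-xnshk -n default -c model-download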

Can someone shed some light on what I am doing wrong, or suggest an alternative to what I have in mind? Why do I get CrashLoopBackOff for the second model server? What did I do wrong in the second model server?

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: model-server-heavy
  name: model-server-heavy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: model-server-heavy
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: model-server-heavy
    spec:
      containers:
      - args:
        - -t
        - "120"
        - -b
        - "0.0.0.0"
        - app:flask_app
        command:
        - gunicorn
        env:
        - name: ENV
          value: staging
        - name: GCP
          value: "2"
        image: gcr.io/my-production/my-model-server:myGitHash
        imagePullPolicy: Always
        name: model-server-heavy
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        resources:
          limits:
            nvidia.com/gpu: 1
        ports:
          - containerPort: 8000
            protocol: TCP
        volumeMounts:
        - name: model-files
          mountPath: /model-server-heavy/models
      # These containers are run during pod initialization
      initContainers:
      - name: model-download
        image: gcr.io/my-production/my-model-server:myGitHash
        command:
        - gsutil
        - cp
        - -r
        - gs://my-staging-models/*
        - /model-files/
        volumeMounts:
        - name: model-files
          mountPath: "/model-files"
      volumes:
      - name: model-files
        emptyDir: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: model-server-heavy
  name: model-server-heavy-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app: model-server-heavy
  sessionAffinity: None
  type: ClusterIP
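
Since the whole point of the heavy service is that the new tasks need a lot of memory, the container in this Deployment should probably also get explicit memory requests and limits; the sizes below are only placeholders and not values taken from my cluster:

        resources:
          requests:
            memory: "8Gi"        # placeholder - size to the actual workload
          limits:
            memory: "16Gi"       # placeholder
            nvidia.com/gpu: 1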

Thanks to @dawid-kruk and @patrick-w. I had to make two modifications in model-server-heavy.yaml to get it working:

  1. Change the mount path from /model-server-heavy/models to /model-server/models.

  2. On line 38 of the model-server-heavy.yaml file, change the name from model-server-heavy back to model-server.

I first tried to fix the problem by applying only item 1, but that did not work. I then applied item 2 as well, and that fixed it. Both 1 and 2 are needed for the server to work. I understand why the change in item 1 is necessary, but I am not sure about item 2.
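
After applying both changes, re-applying the manifest and watching the pod and the Service endpoints is a quick way to confirm the fix, for example:

kubectl apply -f model-server-heavy.yaml
kubectl get pods -l app=model-server-heavy -w        # should reach 1/1 Running instead of CrashLoopBackOff
kubectl get endpoints model-server-heavy-service     # should list the pod IP on port 8000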