Ingress isn't working after migrating from Skaffold and manifests to DevSpace and component charts

I've been playing around with DevSpace using Helm charts, and am considering migrating to it from Skaffold and Kubernetes manifests. I can't seem to get the ingress controller to work for local development: I get a 404 Not Found. I can, however, reach the app via port forwarding at localhost:3000.

As I've always done, I first installed the ingress-nginx controller on docker-desktop with:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml

Then in my devspace.yaml I have the following:

version: v1beta10

images:
  client:
    image: app/client
    dockerfile: client/Dockerfile
    context: client/

deployments:
- name: client
  helm:
    componentChart: true
    values:
      containers:
      - image: app/client
      service:
        ports:
          - port: 3000
      ingress:
        name: ingress
        rules: 
        - host: localhost
          path: /
          pathType: Prefix
          servicePort: 3000
          serviceName: client
dev:
  ports:
  - name: client
    imageSelector: app/client
    forward:
    - port: 3000
      remotePort: 3000
  sync:
  - name: client
    imageSelector: app/client
    localSubPath: ./client
    excludePaths: 
    - .git/
    - node_modules/

The Dockerfile is the same for both configurations:

FROM node:14-alpine
WORKDIR /app
COPY ./package.json ./
ENV CI=true
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Additionally, I noticed that when I add more services (e.g. /api, /admin, etc.) and their corresponding ingress.rules, it creates one ingress per service instead of a single one for the whole application.
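For illustration (a hypothetical config, not my real one): as far as I can tell, each entry under deployments is rendered as its own component chart release, so adding a second deployment with its own ingress values like this produces a second Ingress object rather than a second rule on the existing one:

deployments:
- name: client
  helm:
    componentChart: true
    values:
      containers:
      - image: app/client
      service:
        ports:
        - port: 3000
      ingress:
        rules:
        - host: localhost
          path: /
          pathType: Prefix
          servicePort: 3000
- name: api              # hypothetical second service
  helm:
    componentChart: true
    values:
      containers:
      - image: app/api   # hypothetical image
      service:
        ports:
        - port: 5000
      ingress:
        rules:
        - host: localhost
          path: /api
          pathType: Prefix
          servicePort: 5000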

For reference, this is what I was doing before with skaffold and manifests:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: ingress-dev
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-cluster-ip-service-dev
                port:
                  number: 3000
# client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment-dev
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: client
      environment: development
  template:
    metadata:
      labels:
        component: client
        environment: development
    spec:
      containers:
        - name: client
          image: client
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service-dev
spec:
  type: ClusterIP
  selector:
    component: client
    environment: development
  ports:
    - port: 3000
      targetPort: 3000
# skaffold.yaml
apiVersion: skaffold/v2beta1
kind: Config
build:
  artifacts:
  - image: client
    context: client
    sync:
      manual:
      - src: 'src/**/*.js'
        dest: .
      - src: 'src/**/*.jsx'
        dest: .
      - src: 'package.json'
        dest: .
      - src: 'public/**/*.html'
        dest: .
      - src: 'src/assets/sass/**/*.scss'
        dest: .
      - src: 'src/build/**/*.js'
        dest: .
    docker:
      dockerfile: Dockerfile.dev
  local:
    push: false
deploy:
  kubectl:
    manifests:
      - k8s/ingress.yaml 
      - k8s/client.yaml

I'd rather use the ingress controller than port forwarding during development. That way I can go to localhost/, localhost/admin, localhost/api, etc. I've run into nasty bugs before that didn't show up with port forwarding but did with the ingress controller, so I don't trust port forwarding.

Any suggestions on:

  1. Getting the ingress to work so that it actually reaches the service?
  2. Setting up devspace.yaml so that it creates a single ingress for the app instead of one per service?

devspace render:

---
# Source: component-chart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "client"
  labels:
    "app.kubernetes.io/name": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  externalIPs:
  ports:
    - name: "port-0"
      port: 3000
      targetPort: 3000
      protocol: "TCP"
  selector:
    "app.kubernetes.io/name": "devspace-app"
    "app.kubernetes.io/component": "client"
  type: "ClusterIP"
---
# Source: component-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "client"
  labels:
    "app.kubernetes.io/name": "devspace-app"
    "app.kubernetes.io/component": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      "app.kubernetes.io/name": "devspace-app"
      "app.kubernetes.io/component": "client"
      "app.kubernetes.io/managed-by": "Helm"
  template:
    metadata:
      labels:
        "app.kubernetes.io/name": "devspace-app"
        "app.kubernetes.io/component": "client"
        "app.kubernetes.io/managed-by": "Helm"
      annotations:
        "helm.sh/chart": "component-chart-0.8.2"
    spec:
      imagePullSecrets:
      nodeSelector:
        null
      nodeName:
        null
      affinity:
        null
      tolerations:
        null
      dnsConfig:
        null
      hostAliases:
        null
      overhead:
        null
      readinessGates:
        null
      securityContext:
        null
      topologySpreadConstraints:
        null
      terminationGracePeriodSeconds: 5
      ephemeralContainers:
        null
      containers:
        - image: "croner-app/client:AtrvTRR"
          name: "container-0"
          command:
          args:
          env:
            null
          envFrom:
            null
          securityContext:
            null
          lifecycle:
            null
          livenessProbe:
            null
          readinessProbe:
            null
          startupProbe:
            null
          volumeDevices:
            null
          volumeMounts:
      initContainers:
      volumes:
  volumeClaimTemplates:
---
# Source: component-chart/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "ingress"
  labels:
    "app.kubernetes.io/name": "client"
    "app.kubernetes.io/managed-by": "Helm"
  annotations:
    "helm.sh/chart": "component-chart-0.8.2"
spec:
  rules:
  - host: "localhost"
    http:
      paths:
      - backend:
          serviceName: client
          servicePort: 3000
        path: "/"
        pathType: "Prefix"
---

The biggest difference I can see is that I was using apiVersion: networking.k8s.io/v1 before, whereas devspace generates apiVersion: extensions/v1beta1. Maybe the ingress controller I'm applying, controller-v1.0.0, isn't compatible with that? Not sure...
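One quick way to check, I think, is to list which Ingress API groups the cluster still serves (I believe the extensions/v1beta1 Ingress was removed in Kubernetes 1.22, so it may not show up at all):

kubectl api-versions | grep -E '^(extensions|networking.k8s.io)/'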

To debug this, you may want to run devspace render, which prints the plain manifests that DevSpace generates from the Helm chart before deploying them to the cluster. That way you can see what is different compared to your skaffold manifests. Alternatively, you can inspect what is actually in the cluster with:

kubectl get service --all-namespaces -o yaml   # to see all services
kubectl get ingress --all-namespaces -o yaml   # to see all ingresses

My educated guess at the actual cause of the problem: since you are using componentChart: true, you should not specify serviceName: client for the ingress. I assume this serviceName does not match the service name that the component chart generates from the Helm release name of your deployment. So just remove serviceName: client from your devspace.yaml. Alternatively, you can set name: client on the service to make sure the two match.
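A minimal sketch of the adjusted deployments section, assuming the rest of your config stays as posted (only the service/ingress values change):

deployments:
- name: client
  helm:
    componentChart: true
    values:
      containers:
      - image: app/client
      service:
        name: client          # explicitly name the service so the ingress rule resolves to it
        ports:
        - port: 3000
      ingress:
        name: ingress
        rules:
        - host: localhost
          path: /
          pathType: Prefix
          servicePort: 3000
          # serviceName removed: the component chart points the rule at its own service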

For the full component chart specification, see the docs: https://devspace.sh/component-chart/docs/configuration/reference

In this particular case, the solution was to use an older version of the ingress-nginx controller that is compatible with the Ingress apiVersion DevSpace generates. In my case I was on devspace v5.16.0-alpha.0, and the following controller worked with it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/cloud/deploy.yaml
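After applying it, a quick sanity check (assuming the default ingress-nginx namespace and the ingress name from the config above) that the controller is running and has picked up the ingress:

kubectl get pods -n ingress-nginx    # controller pod should be Running
kubectl describe ingress ingress     # backend should point at the client service on port 3000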

Since this particular fix will change with newer versions of devspace and ingress-nginx, more generally:

  • Make sure the ingress-nginx controller version and the devspace version are compatible.
  • Check devspace render to see how the ingress config is generated, and whether its apiVersion is compatible with the ingress controller version you kubectl apply (see the snippet below).
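For example, a one-liner to print the apiVersion of every Ingress that DevSpace would deploy (grep -B1 shows the line right above each kind: Ingress, which is the apiVersion in the rendered output):

devspace render | grep -B1 'kind: Ingress'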