How do I add a foreground process to a Docker container

When I deploy my .NET Core API using helm upgrade --install flextoeco . I am getting a CrashLoopBackOff error:

NAME                            READY   STATUS             RESTARTS        AGE
flextoecoapi-6bb7cdd846-r6c67   0/1     CrashLoopBackOff   4 (38s ago)     3m8s
flextoecoapi-fb7f7b556-tgbrv    0/1     CrashLoopBackOff   219 (53s ago)   10h
mssql-depl-86c86b5f44-ldj48     0/1     Pending  

I have run ks describe pod flextoecoapi-6bb7cdd846-r6c67 and part of the output is shown below:

Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  5m4s                    default-scheduler  Successfully assigned default/flextoecoapi-6bb7cdd846-r6c67 to fbcdcesdn02
  Normal   Pulling    5m3s                    kubelet            Pulling image "golide/flextoeco:1.1.1"
  Normal   Pulled     4m57s                   kubelet            Successfully pulled image "golide/flextoeco:1.1.1" in 6.2802081s
  Normal   Killing    4m34s                   kubelet            Container flextoeco failed liveness probe, will be restarted
  Normal   Created    4m33s (x2 over 4m57s)   kubelet            Created container flextoeco
  Normal   Started    4m33s (x2 over 4m56s)   kubelet            Started container flextoeco
  Normal   Pulled     4m33s                   kubelet            Container image "golide/flextoeco:1.1.1" already present on machine
  Warning  Unhealthy  4m14s (x12 over 4m56s)  kubelet            Readiness probe failed: Get "http://10.244.6.59:80/": dial tcp 10.244.0.59:80: connect: connection refused
  Warning  Unhealthy  4m14s (x5 over 4m54s)   kubelet            Liveness probe failed: Get "http://10.244.6.59:80/": dial tcp 10.244.0.59:80: connect: connection refused
  Warning  BackOff    3s (x10 over 2m33s)     kubelet            Back-off restarting failed container

From the suggestions it appears I have a number of options to fix this, most notably: i) add a command to the Dockerfile that ensures some foreground process keeps running, or ii) extend the livenessProbe initialDelaySeconds (a sketch of option ii follows below).
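
For reference, option ii) would only touch the probe block in templates/deployment.yaml. This is a minimal sketch; both the probe type and the 120-second delay are illustrative values, not taken from my actual chart:

livenessProbe:
  tcpSocket:
    port: http
  # illustrative: give the container more time to start before the first liveness check
  initialDelaySeconds: 120
  periodSeconds: 30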

I opted for the first option and edited my Dockerfile as follows:

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:3.1
WORKDIR /app
ENV ASPNETCORE_URLS http://+:5000
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "FlexToEcocash.dll"]
CMD tail -f /dev/null

After this change I am still getting the same error.

UPDATE

Of note: the deployment works perfectly when I am not using helm, i.e. I can kubectl apply the deployment/service/nodeport/clusterip manifests and the API is deployed without issues.

I have tried updating values.yaml and service.yaml as shown below, but after redeploying the CrashLoopBackOff error is still there:

templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "flextoeco.fullname" . }}
  labels:
    {{- include "flextoeco.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "flextoeco.selectorLabels" . | nindent 4 }}

values.yaml
I have explicitly specified the CPU and memory usage here:

replicaCount: 1
image:
  repository: golide/flextoeco
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.1.2"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  
podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
 
service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: flextoeco.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 250Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "flextoeco.fullname" . }}
  labels:
    {{- include "flextoeco.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "flextoeco.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "flextoeco.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "flextoeco.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 8085
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 20
          readinessProbe:
            tcpSocket:
              port: 8085
            initialDelaySeconds: 300
            periodSeconds: 30
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

In the Deployment spec I needed to use port 5000 as the containerPort: value and also as the port: in the probes, since my application is listening on port 5000:

            - name: http
              containerPort: 5000
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 300
            periodSeconds: 30
            timeoutSeconds: 20
          readinessProbe:
            tcpSocket:
              port: 5000
            initialDelaySeconds: 300
            periodSeconds: 30

The configuration in service.yaml is correct: because the Deployment spec maps the name http to port 5000, referencing targetPort: http in the Service resolves to the right port.
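
To illustrate how the mapping chains together, here is a rough sketch pieced together from the chart above (Service fragment first, then the Deployment fragment):

# Service: port 80 forwards to the pod port named "http"
ports:
  - port: 80
    targetPort: http

# Deployment: the name "http" resolves to containerPort 5000,
# which matches ASPNETCORE_URLS http://+:5000 from the Dockerfile
ports:
  - name: http
    containerPort: 5000
    protocol: TCP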