Canary rollouts with Linkerd and Argo Rollouts
I'm trying to configure a canary rollout for a demo, but I can't get the traffic split to work with Linkerd. Interestingly, I was able to get this working with Istio, which I found much more complicated than Linkerd.
I have a basic Go service defined as follows:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: fish
spec:
  [...]
  strategy:
    canary:
      canaryService: canary-svc
      stableService: stable-svc
      trafficRouting:
        smi: {}
      steps:
      - setWeight: 5
      - pause: {}
      - setWeight: 20
      - pause: {}
      - setWeight: 50
      - pause: {}
      - setWeight: 80
      - pause: {}
---
apiVersion: v1
kind: Service
metadata:
  name: canary-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stable-svc
spec:
  selector:
    app: fish
  ports:
  - name: http
    port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fish
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    cert-manager.io/cluster-issuer: letsencrypt-production
    cert-manager.io/acme-challenge-type: dns01
    external-dns.alpha.kubernetes.io/hostname: fish.local
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
spec:
  rules:
  - host: fish.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stable-svc
            port:
              number: 8080
When I deploy (sync) via Argo CD, I can see the traffic split at 50/50:
- apiVersion: split.smi-spec.io/v1alpha2
  kind: TrafficSplit
  metadata:
    [...]
    name: fish
    namespace: default
  spec:
    backends:
    - service: canary-svc
      weight: "50"
    - service: stable-svc
      weight: "50"
    service: stable-svc
But running curl in a while loop, I only ever hit the stable svc. The only time I see a change is after I move the rollout all the way to 100%.
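For reference, this is roughly how I'm testing the split (assuming fish.local resolves to the ingress and the service's response identifies which version answered):

```shell
# Send 100 requests through the ingress and tally the distinct responses;
# the canary/stable ratio should roughly match the TrafficSplit weights.
for i in $(seq 1 100); do
  curl -s http://fish.local/
  echo
done | sort | uniq -c
```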
I tried to follow this: https://argoproj.github.io/argo-rollouts/getting-started/smi/
Any help would be greatly appreciated.
Thanks
After reading this: https://linkerd.io/2.10/tasks/using-ingress/ I found that you need to modify the ingress controller with a special annotation:
$ kubectl get deployment <ingress-controller> -n <ingress-namespace> -o yaml | linkerd inject --ingress - | kubectl apply -f -
TL;DR: if you want Linkerd functionality like service profiles, traffic splits, etc., there is additional configuration required to make the ingress controller's Linkerd proxy run in ingress mode.
There's more context in this issue, but the TL;DR is that ingresses tend to target individual pods instead of the service address. Putting Linkerd's proxy in ingress mode tells it to override that behaviour. NGINX does already have a setting that will let it hit services instead of endpoints directly; you can see that in their docs here.
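If you'd rather not re-inject the controller in ingress mode, the NGINX setting mentioned above is, as far as I know, the `service-upstream` annotation on the Ingress, which makes ingress-nginx proxy to the Service's cluster IP instead of the individual pod endpoints (a sketch, not tested against your setup):

```yaml
metadata:
  annotations:
    # Route via the Service's cluster IP so the service-level
    # TrafficSplit can actually apply to the traffic.
    nginx.ingress.kubernetes.io/service-upstream: "true"
```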