istio routing between two pods
Trying to use Istio on Kubernetes, but it seems I'm missing some of the fundamentals, or I'm doing something wrong. I'm quite experienced with Kubernetes, but Istio and its VirtualService confuse me a bit.
I created two deployments (helloworld-v1/helloworld-v2). Both use the same image; the only difference is an environment variable that outputs either version: "v1" or version: "v2". I'm using a little test container I wrote that basically returns the headers I got into the application. Both are reachable via a Kubernetes service named "helloworld".
I created a VirtualService and a DestinationRule:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90
    - destination:
        host: helloworld
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
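For the subsets above to receive any traffic, the pod templates of the two deployments must carry the matching version labels. A minimal sketch of what the v1 deployment could look like (the image name, env variable name, and container port are illustrative assumptions, not from the original setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: v1          # the DestinationRule subset "v1" selects this label
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: example/helloworld:latest   # illustrative image name
        env:
        - name: VERSION                    # hypothetical variable; the app echoes it back
          value: "v1"
        ports:
        - containerPort: 3000
```

The v2 deployment would be identical apart from the version label and the env value.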
According to the docs, since no gateway is mentioned, the internal "mesh" gateway should be used.
The sidecar containers were attached successfully:
kubectl -n demo get all
NAME READY STATUS RESTARTS AGE
pod/curl-6657486bc6-w9x7d 2/2 Running 0 3h
pod/helloworld-v1-d4dbb89bd-mjw64 2/2 Running 0 6h
pod/helloworld-v2-6c86dfd5b6-ggkfk 2/2 Running 0 6h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/helloworld ClusterIP 10.43.184.153 <none> 80/TCP 6h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/curl 1 1 1 1 3h
deployment.apps/helloworld-v1 1 1 1 1 6h
deployment.apps/helloworld-v2 1 1 1 1 6h
NAME DESIRED CURRENT READY AGE
replicaset.apps/curl-6657486bc6 1 1 1 3h
replicaset.apps/helloworld-v1-d4dbb89bd 1 1 1 6h
replicaset.apps/helloworld-v2-6c86dfd5b6 1 1 1 6h
When I access the application from "outside" (via the istio-ingressgateway) everything works fine: v2 is called once, v1 nine times:
curl --silent -H 'host: helloworld' http://localhost
{"host":"helloworld","user-agent":"curl/7.47.0","accept":"*/*","x-forwarded-for":"10.42.0.0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"a6a2d903-360f-91a0-b96e-6458d9b00c28","x-envoy-decorator-operation":"helloworld:80/*","x-b3-traceid":"e36ef1ba2229177e","x-b3-spanid":"e36ef1ba2229177e","x-b3-sampled":"1","x-istio-attributes":"Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVtby9zZXJ2aWNlcy9oZWxsb3dvcmxkCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFoZWxsb3dvcmxkLmRlbW8uc3ZjLmNsdXN0ZXIubG9jYWwKJwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USBhIEZGVtbwooChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDBIKaGVsbG93b3JsZAo6ChNkZXN0aW5hdGlvbi5zZXJ2aWNlEiMSIWhlbGxvd29ybGQuZGVtby5zdmMuY2x1c3Rlci5sb2NhbApPCgpzb3VyY2UudWlkEkESP2t1YmVybmV0ZXM6Ly9pc3Rpby1pbmdyZXNzZ2F0ZXdheS01Y2NiODc3NmRjLXRyeDhsLmlzdGlvLXN5c3RlbQ==","content-length":"0","version":"v1"}
"version": "v1",
"version": "v1",
"version": "v2",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
"version": "v1",
But as soon as I perform the curl from within a pod (in this case just byrnedo/alpine-curl) against the service, things start to get confusing:
curl --silent -H 'host: helloworld' http://helloworld.demo.svc.cluster.local
{"host":"helloworld","user-agent":"curl/7.61.0","accept":"*/*","version":"v1"}
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v1"
"version":"v2"
"version":"v2"
"version":"v1"
"version":"v2"
"version":"v1"
Not only am I missing all the Istio attributes (which I understand for service-to-service communication, since as far as I know they are set when the request first enters the mesh through the gateway), but my balance looks like the default 50:50 balance of a Kubernetes service.
What do I have to do to achieve the same 1:9 balance for inter-service communication? Do I have to create a second, "internal" gateway to use instead of the service fqdn? Did I miss a definition? Should calling a service fqdn from within a pod respect the VirtualService routing at all?
The Istio version used is 1.0.1, the Kubernetes version v1.11.1.
Update
Deployed a sleep pod as suggested (this time not relying on auto-injection in the demo namespace), but deployed manually as described in the sleep sample:
kubectl -n demo get deployment sleep -o wide
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS          IMAGES                                     SELECTOR
sleep   1         1         1            1           2m    sleep,istio-proxy   tutum/curl,docker.io/istio/proxyv2:1.0.1   app=sleep
Also changed the VirtualService to 0/100 to see at a glance whether it works. Unfortunately, that did not change much:
export SLEEP_POD=$(kubectl get -n demo pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"}
kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld
{"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"}
Route rules are evaluated on the client side, so you need to make sure that the pod you're running curl from has an Istio sidecar attached. If it just calls the service directly, it cannot evaluate the 90-10 rule you've set and instead just falls back to the default kube round-robin routing.
The Istio sleep sample is great to use as a test client pod.
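For reference, a client deployment in the spirit of the sleep sample could look like the following (a minimal sketch; the official manifest lives at samples/sleep/sleep.yaml in the Istio release, and the image matches the tutum/curl one visible in the update above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep", "3650d"]   # keep the pod alive for exec'ing curl
```

The point is simply to have a long-running pod with a sidecar from which route rules can be evaluated.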
Found the solution: one of the prerequisites (which I forgot) is that named ports are required for proper routing: see https://istio.io/docs/setup/kubernetes/spec-requirements/.
Wrong:
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000
Right:
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
After naming the port http, everything works like a charm.
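Putting it together, the complete Service could look like this (a sketch; the selector labels and target port are assumptions, since the original Service manifest is not shown in full):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  namespace: demo
spec:
  selector:
    app: helloworld        # assumed selector; not shown in the question
  ports:
  - name: http             # Istio requires the name to be <protocol>[-<suffix>], e.g. http or http-web
    port: 80
    protocol: TCP
    targetPort: 3000
```

Without a protocol-prefixed port name, Istio 1.0 treats the traffic as plain TCP and the sidecar never applies the HTTP route weights.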