How to configure an Istio VirtualService to route TCP traffic between two pods?
I have a server application listening on port 8000 and a client application that opens a TCP connection to the server. I want to use the Istio sidecar to redirect the TCP traffic, so I did the following:
- Changed the client's connection address from server_ip:8000 to localhost:8000
- Wrote the Kubernetes Deployment and Service for the server:
apiVersion: v1
kind: Service
metadata:
  name: hello-server
  labels:
    app: hello-server
    service: hello-server
spec:
  ports:
  - name: tcp
    port: 8000
  selector:
    app: hello-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: server_test
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
- Wrote the Kubernetes Deployment for the client:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-client
  labels:
    app: hello-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-client
  template:
    metadata:
      labels:
        app: hello-client
    spec:
      containers:
      - name: hello-client
        image: client_test
        imagePullPolicy: IfNotPresent
What should the VirtualService YAML be? Without a gateway I don't know what the host should be; the server pod hostname?
As the Istio documentation states:

The virtual service hostname can be an IP address, a DNS name, or, depending on the platform, a short name (such as a Kubernetes service short name) that resolves, implicitly or explicitly, to a fully qualified domain name (FQDN). You can also use wildcard ("*") prefixes, letting you create a single set of routing rules for all matching services. Virtual service hosts don't actually have to be part of the Istio service registry, they are simply virtual destinations. This lets you model traffic for virtual hosts that don't have routable entries inside the mesh.

So it should be either the full FQDN or the short name of your service. In your example that would be hello-server.default.svc.cluster.local or hello-server.

An appropriate VirtualService therefore looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "hello-server.default.svc.cluster.local"
  tcp:
  - match:
    - port: 8000
    route:
    - destination:
        host: hello-server.default.svc.cluster.local
        port:
          number: 8000
Or, with the short name:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "hello-server"
  tcp:
  - match:
    - port: 8000
    route:
    - destination:
        host: hello-server
        port:
          number: 8000
Below is an example of a TCP server listening on port 9000, deployed in v1 and v2 versions and tested with netcat from a sidecar-injected pod.
The TCP server with its v1 and v2 Deployments and the Service:
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  labels:
    app: tcp-echo
    service: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  selector:
    app: tcp-echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
  labels:
    app: tcp-echo
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000", "one" ]
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v2
  labels:
    app: tcp-echo
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v2
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v2
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000", "two" ]
        ports:
        - containerPort: 9000
A VirtualService that sends 99% of the traffic to the v1 version of the server (the v1 and v2 subsets it references are defined in a DestinationRule, sketched after this manifest):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "tcp-echo.default.svc.cluster.local"
  tcp:
  - match:
    - port: 9000
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
      weight: 99
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v2
      weight: 1
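The v1 and v2 subsets used above need a matching DestinationRule; a minimal sketch that selects pods by the version labels used in the Deployments above (the resource name tcp-echo-destination is just an example):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  # subset v1 selects the tcp-echo-v1 pods
  - name: v1
    labels:
      version: v1
  # subset v2 selects the tcp-echo-v2 pods
  - name: v2
    labels:
      version: v2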
Testing with netcat from a sidecar-injected Ubuntu pod (a sketch of such a test pod follows the output):
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
two world
^C
root@ubu1:/# sh -c "echo world | nc tcp-echo 9000"
one world
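The injected Ubuntu pod used for the test can be as simple as the manifest below; this is a sketch under the assumption that the pod runs in a namespace with automatic sidecar injection enabled (the pod name ubu1 simply mirrors the prompt in the transcript):
apiVersion: v1
kind: Pod
metadata:
  name: ubu1
  labels:
    app: ubu1
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    # keep the container alive so we can kubectl exec into it and run nc
    command: ["sleep", "infinity"]
Note that the base ubuntu image does not ship netcat; install it inside the pod with apt-get update && apt-get install -y netcat before running the nc commands shown above.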
For reference, there is an iptables schematic of the traffic flow between the productpage and reviews services.
Having the workload (the app itself) connect to localhost:port and expecting istio-proxy (the Envoy sidecar) to redirect that traffic out of the pod is not correct usage, at least as of the current Istio version (1.7). Istio's iptables configuration script explicitly prevents it with:
# Do not redirect app calls to back itself via Envoy when using the endpoint address
# e.g. appN => appN by lo
iptables -t nat -A ISTIO_OUTPUT -o lo -m owner ! --gid-owner "${gid}" -j RETURN
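In other words, the client should dial the server through its Kubernetes service name and let the sidecar intercept that outbound connection, rather than dialing localhost. A sketch of the client Deployment with that change, under the (hypothetical) assumption that the client_test image reads its target address from a SERVER_ADDR environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-client
  labels:
    app: hello-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-client
  template:
    metadata:
      labels:
        app: hello-client
    spec:
      containers:
      - name: hello-client
        image: client_test
        imagePullPolicy: IfNotPresent
        env:
        # hypothetical variable: point the client at the service name,
        # not at localhost, so the sidecar can intercept the connection
        - name: SERVER_ADDR
          value: "hello-server.default.svc.cluster.local:8000"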