How does matchExpressions work in NetworkPolicy
I have two pods, payroll and mysql, labeled name=payroll and name=mysql respectively. There is another pod named internal with the label name=internal. I am trying to allow egress traffic from internal to the other two pods, while allowing all ingress traffic. My NetworkPolicy looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
This does not match the two pods payroll and mysql. What am I doing wrong?
The following works:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3306
What is the best way to write a NetworkPolicy, and why is the first one incorrect?
I am also wondering why the to field is an array while podSelector inside it is also an array. I mean, they are the same, right? Multiple podSelector entries or multiple to fields: using either one works.
This does not match the two pods payroll and mysql. What am I doing wrong?
- I have reproduced your scenario in both pod-to-service and pod-to-pod environments, and in both cases the two yamls work fine. That said, after fixing the indentation at line 19, the podSelector entries should be at the same level, like this:
- to:
  - podSelector:
      matchLabels:
        name: payroll
  - podSelector:
      matchLabels:
        name: mysql
What is the best way to write a NetworkPolicy?
- Best practice depends on each case, and ideally you create one network policy per rule. I would say the first yaml is best if you intend to expose ports 8080 and 3306 on both pods; otherwise it's better to create two rules, to avoid leaving unnecessary ports open.
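As an illustration of the two-rule approach, a split version of the policy might look like this (a sketch assuming payroll listens on 8080 and mysql on 3306; the policy name internal-policy-split is made up for this example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy-split
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  # Rule 1: allow traffic to payroll only on 8080
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  # Rule 2: allow traffic to mysql only on 3306
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
```

This way internal cannot reach payroll on 3306 or mysql on 8080, which the single combined rule would allow.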
I also am wondering why the to field is an array while the podSelector is also an array inside it? I mean they are the same right? Multiple podSelector or multiple to fields. Using one of them works.
From the NetworkPolicySpec v1 networking API Ref:
egress (NetworkPolicyEgressRule array):
List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod, OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod.
Also keep in mind that this list includes the ports array as well.
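In other words, each entry in egress is a rule pairing a list of to peers with a list of ports: peers within one rule are ORed together and all share that rule's ports, while separate rules are also ORed but each carries its own ports. So the two forms in your question are equivalent in effect, along the lines of:

```yaml
egress:
# One rule, two peers: traffic to EITHER pod is allowed on BOTH ports.
- to:
  - podSelector:
      matchLabels:
        name: payroll
  - podSelector:
      matchLabels:
        name: mysql
  ports:
  - protocol: TCP
    port: 8080
  - protocol: TCP
    port: 3306
```

Splitting the peers into two separate egress rules only changes the outcome if you also give each rule a different ports list.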
Why is the first one incorrect?
- The two rules are essentially the same, just written differently. I would say you should check whether any other rules are in effect for the same labels.
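As a side note, matchLabels and matchExpressions are two spellings of the same label selector, so for a single label value the following two selectors match the same pods:

```yaml
# Exact-match form:
podSelector:
  matchLabels:
    name: payroll
# Equivalent matchExpressions form:
podSelector:
  matchExpressions:
  - {key: name, operator: In, values: [payroll]}
```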
- I suggest you create a test cluster and try applying the step-by-step example I will leave below.
Reproduction:
- This example is very similar to your case. I am using the nginx image to make testing easier, and changed the port on the NetworkPolicy to 80. I named your first yaml internal-original.yaml and the second one you posted second-internal.yaml:
$ cat internal-original.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-original
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchExpressions:
        - {key: name, operator: In, values: [payroll, mysql]}
    ports:
    - protocol: TCP
      port: 80
$ cat second-internal.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 80
- Now let's create the pods with the labels and expose the services:
$ kubectl run mysql --generator=run-pod/v1 --labels="name=mysql" --image=nginx
pod/mysql created
$ kubectl run internal --generator=run-pod/v1 --labels="name=internal" --image=nginx
pod/internal created
$ kubectl run payroll --generator=run-pod/v1 --labels="name=payroll" --image=nginx
pod/payroll created
$ kubectl run other --generator=run-pod/v1 --labels="name=other" --image=nginx
pod/other created
$ kubectl expose pod mysql --port=80
service/mysql exposed
$ kubectl expose pod payroll --port=80
service/payroll exposed
$ kubectl expose pod other --port=80
service/other exposed
- Now, before applying the networkpolicy, I'll log into the internal pod and download wget, since external access will be blocked afterwards:
$ kubectl exec internal -it -- /bin/bash
root@internal:/# apt update
root@internal:/# apt install wget -y
root@internal:/# exit
- Since your rule blocks access to DNS, I'll list the IPs and use them for testing:
$ kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP
internal   1/1     Running   0          62s   10.244.0.192
mysql      1/1     Running   0          74s   10.244.0.141
other      1/1     Running   0          36s   10.244.0.216
payroll    1/1     Running   0          48s   10.244.0.17
$ kubectl get services
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
mysql     ClusterIP   10.101.209.87   <none>        80/TCP    23s
other     ClusterIP   10.103.39.7     <none>        80/TCP    9s
payroll   ClusterIP   10.109.102.5    <none>        80/TCP    14s
- Now let's test access using the first yaml:
$ kubectl get networkpolicy
No resources found in default namespace.
$ kubectl apply -f internal-original.yaml
networkpolicy.networking.k8s.io/internal-original created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:17:55-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:04-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 18:18:08-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
- Now let's test access using the second yaml:
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
internal-original name=internal 96s
$ kubectl delete networkpolicy internal-original
networkpolicy.networking.k8s.io "internal-original" deleted
$ kubectl apply -f second-internal.yaml
networkpolicy.networking.k8s.io/internal-policy created
$ kubectl exec internal -it -- /bin/bash
root@internal:/# wget --spider --timeout=1 http://10.101.209.87
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:24-- http://10.101.209.87/
Connecting to 10.101.209.87:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.109.102.5
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:30-- http://10.109.102.5/
Connecting to 10.109.102.5:80... connected.
HTTP request sent, awaiting response... 200 OK
root@internal:/# wget --spider --timeout=1 http://10.103.39.7
Spider mode enabled. Check if remote file exists.
--2020-06-08 17:18:35-- http://10.103.39.7/
Connecting to 10.103.39.7:80... failed: Connection timed out.
- As you can see, the connection to the services with the matching labels works, and the connection to the pod with other labels fails.
Note: if you wish to allow your pods to resolve DNS, you can follow this guide: Allow DNS Egress Traffic
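For reference, allowing DNS typically means adding one more egress rule that opens port 53 to any destination. A minimal sketch (with no to clause, the rule applies to all peers; you may want to narrow it to your cluster's DNS pods, whose labels vary by distribution):

```yaml
egress:
# Allow DNS resolution (UDP and TCP port 53) to any destination.
- ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```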
If you have any questions, let me know in the comments.