How to restrict a pod to connect only to 2 pods using NetworkPolicy, and how to test the connection, in k8s in a simple way?
Do I also need to expose the pod via a ClusterIP service?
There are 3 pods: main, front, and api. I need to allow ingress + egress connections to the main pod only from the api and front pods. I also created service-main, a service exposing the main pod on port: 80.
I don't know how to test it; I tried:
k exec main -it -- sh
nc -z -v -w 5 service-main 80
and
k exec main -it -- sh
curl front:80
main.yaml pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d
front.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
api.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d
main-to-front-networkpolicy.yaml:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
What am I doing wrong? Do I also need to expose the main pod via a service? But shouldn't the network policy already take care of this?
Also, do I need to set containerPort: 80 in the main pod? How do I test connectivity and make sure the ingress-egress rules apply only between the main pod and the api and front pods?
I tried a lab from a CKAD prep course that has 2 pods: secure-pod and web-pod. There was a connectivity problem, and the solution was to create a network policy and test it with netcat from inside the web-pod container:
k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
UPDATE: Ideally I would like answers to these questions:
1. A clear explanation of the difference between a Service and a NetworkPolicy.
2. If both a Service and a NetworkPolicy exist, in what order is traffic/a request evaluated? Does it go through the NetworkPolicy first and then the Service, or vice versa?
3. If I want the front and api pods to send/receive traffic to main, do I need separate services exposing the front and api pods?
NetworkPolicy and Service are two different and independent Kubernetes resources.
A Service is:
An abstract way to expose an application running on a set of Pods as a network service.
A good explanation from the Kubernetes docs:
Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
Enter Services.
There is also a good explanation in this answer.
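For reference, here is a minimal sketch of what your service-main could look like (the targetPort value is an assumption; point it at whatever port the main container actually listens on):
apiVersion: v1
kind: Service
metadata:
  name: service-main
spec:
  type: ClusterIP          # the default; reachable only from inside the cluster
  selector:
    app: main              # matches the label on the main pod
  ports:
  - port: 80               # port the service is exposed on
    targetPort: 8080       # port the container listens on (assumption - adjust)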
For production you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods: Deployment, StatefulSet, DaemonSet.
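As an illustration only (a sketch, not your exact setup - the replica count and image are assumptions), a minimal Deployment that could manage the main pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main
spec:
  replicas: 2                # assumption: two interchangeable pods behind service-main
  selector:
    matchLabels:
      app: main
  template:
    metadata:
      labels:
        app: main            # service-main and the NetworkPolicy select this label
    spec:
      containers:
      - name: main
        image: nginx         # assumption: nginx listens on port 80 by default
        ports:
        - containerPort: 80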
And use a Service to make requests to your application.
NetworkPolicies are used to control traffic flow:
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
Network policies target pods, not services (which are an abstraction). Check this answer and this one.
Regarding your example - your network policy is correct (as I tested it below). The problem is your cluster:
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
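A quick way to check this (a sketch; the exact pod names and namespaces depend on how the plugin was installed) is to look for a policy-enforcing CNI plugin such as Calico or Cilium among the system pods:
user@shell:~$ kubectl get pods -n kube-system
If no calico-node-... or cilium-... pods show up there, your NetworkPolicies are most likely not being enforced at all.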
Testing on a kubeadm cluster with the Calico plugin -> I created pods similar to yours, but I changed the container section:
spec:
  containers:
  - name: main
    image: nginx
    command: ["/bin/sh","-c"]
    args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
    ports:
    - containerPort: 8080
So the NGINX application is available on port 8080.
Let's check the pod IPs:
user@shell:~$ kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE                                NOMINATED NODE   READINESS GATES
api     1/1     Running   0          48m   192.168.156.61   example-ubuntu-kubeadm-template-2   <none>           <none>
front   1/1     Running   0          48m   192.168.156.56   example-ubuntu-kubeadm-template-2   <none>           <none>
main    1/1     Running   0          48m   192.168.156.52   example-ubuntu-kubeadm-template-2   <none>           <none>
Let's exec into the running main pod and try to make a request to the api pod (192.168.156.61, per the table above):
root@main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
It's working.
After applying the network policy:
user@shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...
It no longer works - egress from main to the api pod is now blocked - which means the network policy was applied successfully.
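To double-check that the allowed path still works, you could also curl the front pod by IP from inside main (a sketch based on the IPs above; use the IP rather than a name, because the egress rule only allows traffic to port 8080 of app=front pods, so DNS lookups from main are blocked too):
root@main:/# curl 192.168.156.56:8080
This should still return the NGINX welcome page, confirming that the policy blocks only the traffic it is supposed to block.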
A good option for getting more information about an applied network policy is to run the kubectl describe command:
user@shell:~$ kubectl describe networkpolicy front-end-policy
Name: front-end-policy
Namespace: default
Created on: 2022-01-26 15:17:58 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=main
Allowing ingress traffic:
To Port: 8080/TCP
From:
PodSelector: app=front
Allowing egress traffic:
To Port: 8080/TCP
To:
PodSelector: app=front
Policy Types: Ingress, Egress
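Finally, your policy currently allows only the front pod, while you said you want both api and front. Multiple podSelector entries under from / to are OR-ed together, so a sketch of the extended policy could look like this (still based on your manifests; keep in mind the port must match whatever the main container actually listens on):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    - podSelector:           # second peer: also allow the api pod
        matchLabels:
          app: api
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    - podSelector:
        matchLabels:
          app: api
    ports:
    - port: 8080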