Google Kubernetes Engine: NetworkPolicy allowing egress to k8s-metadata-proxy
Context
I have a Google Kubernetes Engine (GKE) cluster with Workload Identity enabled. As part of Workload Identity, a k8s-metadata-proxy DaemonSet runs on the cluster. I have a namespace my-namespace and want to deny all egress traffic from pods in the namespace except egress to the k8s-metadata-proxy DaemonSet. I therefore have the following NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  # Apply to all pods.
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    # This is needed to whitelist k8s-metadata-proxy. See https://github.com/GoogleCloudPlatform/k8s-metadata-proxy
    - protocol: TCP
      port: 988
Problem
The NetworkPolicy is too broad, as it allows egress TCP traffic to any host on port 988 rather than only egress to the k8s-metadata-proxy DaemonSet, but I can't seem to find a way to specify .spec.egress[0].to that achieves the granularity I want.
I have tried the following to sections:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        namespace: kube-system
  ports:
  - protocol: TCP
    port: 988
- to:
  - ipBlock:
      cidr: <cidr of pod IP range>
  - ipBlock:
      cidr: <cidr of services IP range>
  ports:
  - protocol: TCP
    port: 988
but these rules result in traffic to k8s-metadata-proxy being blocked.
Question
How do I select the k8s-metadata-proxy DaemonSet in the to section of a networking.k8s.io/v1 NetworkPolicy egress rule?
As I mentioned in the comments:
Hello. You can add to your Egress definition podSelector.matchLabels to allow your pod to connect only to the Pods with specific label. You can read more about it here: cloud.google.com/kubernetes-engine/docs/tutorials/…
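For plain pod-to-pod traffic, that suggestion would look roughly like the sketch below. The namespace label and the proxy pod label here are assumptions, not values verified on GKE:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-proxy # hypothetical name
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    # namespaceSelector and podSelector in the same entry are ANDed:
    # pods with this label, in namespaces with that label.
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system # assumes this label is present
      podSelector:
        matchLabels:
          k8s-app: metadata-proxy # assumed label on the proxy pods
    ports:
    - protocol: TCP
      port: 988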
This comment may be misleading, however, because communication with the gke-metadata-server works differently, as described in the official documentation. Focusing on the relevant part of that documentation:
Understanding the GKE metadata server
The GKE metadata server is a new metadata server designed for use with Kubernetes. It runs as a daemonset, with one Pod on each cluster node. The metadata server intercepts HTTP requests to http://metadata.google.internal (169.254.169.254:80), including requests like GET /computeMetadata/v1/instance/service-accounts/default/token to retrieve a token for the Google service account the Pod is configured to act as. Traffic to the metadata server never leaves the VM instance that hosts the Pod.
Note: If you have a strict cluster network policy in place, you must allow egress to 127.0.0.1/32 on port 988 so your Pod can communicate with the GKE metadata server.
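For reference, the token request mentioned in the quote, issued from inside a Pod, looks like this (the Metadata-Flavor header is required):

$ curl -s -H 'Metadata-Flavor: Google' \
    http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token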
The rule allowing traffic only to the GKE metadata server is described in the last paragraph of the quote above. The YAML definition should look like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-rule
  namespace: restricted-namespace # <- namespace your pod is in
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: nginx # <- label used by pods trying to communicate with metadata server
  egress:
  - to:
    - ipBlock:
        cidr: 127.0.0.1/32 # <- allow communication with metadata server #1
  - ports:
    - protocol: TCP
      port: 988 # <- allow communication with metadata server #2
Assuming that:
- you have a Kubernetes cluster with:
  - Network Policy enabled
  - Workload Identity enabled
- your Pods are communicating from the restricted-namespace namespace
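Assuming the manifest above is saved as egress-rule.yaml, you can apply it with:

kubectl apply -f egress-rule.yaml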
Describing the NetworkPolicy then produces the following output:
$ kubectl describe networkpolicy -n restricted-namespace egress-rule
Name:         egress-rule
Namespace:    restricted-namespace
Created on:   2020-10-04 18:31:10 +0200 CEST
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"egress-rule","namespace":"restricted-name...
Spec:
  PodSelector:     app=nginx
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Allowing egress traffic:
    To Port: <any> (traffic allowed to all ports)
    To:
      IPBlock:
        CIDR: 127.0.0.1/32
        Except:
    ----------
    To Port: 988/TCP
    To: <any> (traffic not restricted by source)
  Policy Types: Egress
Disclaimer!
Applying these rules will deny all traffic from pods with the app=nginx label that is not destined for the metadata server!
You can create, and exec into, a pod with the label app=nginx as follows:
kubectl run -it --rm nginx \
--image=nginx \
--labels="app=nginx" \
--namespace=restricted-namespace \
-- /bin/bash
Tip!
The nginx image is used here because it has curl installed by default!
With this policy in place, you won't be able to communicate with the DNS server. You can either:
- allow your pods to communicate with the DNS server (see the sketch below), or
- set the env variable for the metadata server to its IP (169.254.169.254) so no DNS lookup is needed
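A minimal sketch of the first option, opening DNS (port 53) for the app=nginx pods. Note that an empty namespaceSelector matches every namespace, which is broader than just kube-system, where kube-dns runs; the policy name is hypothetical:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns # hypothetical name
  namespace: restricted-namespace
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: nginx
  egress:
  - to:
    - namespaceSelector: {} # matches every namespace; kube-dns runs in kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53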
An example of communicating with the GKE Metadata Server:
$ curl 169.254.169.254/computeMetadata/v1/instance/ -H 'Metadata-Flavor: Google'
attributes/
hostname
id
service-accounts/
zone
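Conversely, egress to any other destination should fail once the policy is applied. A quick check from the same pod (1.1.1.1 is just an arbitrary external IP):

$ curl -s --max-time 5 http://1.1.1.1 || echo "egress blocked, as expected"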
To allow specific pods to send traffic only to specific pods on a specific port, you can use the following policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-rule
  namespace: restricted-namespace # <- namespace of "source" pod
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: ubuntu # <- label for "source" pod
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nginx # <- label for "destination" pod
    ports:
    - protocol: TCP
      port: 80 # <- allow only port 80
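A quick way to exercise this policy; the pod names are illustrative, and the nginx image is reused for the source pod (labeled app=ubuntu) because it ships with curl:

# Destination pod with the app=nginx label:
kubectl run nginx --image=nginx --labels="app=nginx" \
  --namespace=restricted-namespace

# Source pod with the app=ubuntu label (nginx image for its preinstalled curl):
kubectl run -it --rm source --image=nginx --labels="app=ubuntu" \
  --namespace=restricted-namespace -- /bin/bash

# Inside the source pod, port 80 on the destination pod should respond,
# while any other port or destination should time out.
# <nginx-pod-ip> is a placeholder; look it up with: kubectl get pod -o wide
curl --max-time 5 http://<nginx-pod-ip>:80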