Grant Kubernetes service account privileges to get pods from all namespaces

I want to grant a Kubernetes service account permission to run kubectl --token $token get pod --all-namespaces. I'm familiar with doing this for a single namespace, but I don't know how to do it for all namespaces (including new namespaces that may be created in the future, and without granting the service account full admin privileges).

Currently I get this error message:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:kube-system:test" cannot list resource "pods" in API group "" at the cluster scope

What (cluster) role and role binding are needed?

UPDATE: Assigning the role view to the service account with the following ClusterRoleBinding works and is a step forward. However, I'd like to restrict the service account's permissions further, to the minimum required.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
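Before narrowing the role, you can check what a given binding allows without extracting the token at all, using `kubectl auth can-i` with impersonation (your own user needs impersonation rights for this):

```shell
# Ask the API server whether the service account may list pods cluster-wide;
# prints "yes" or "no"
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:kube-system:test
```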

The service account's token can be extracted as follows:

secret=$(kubectl get serviceaccount test -n kube-system -o=jsonpath='{.secrets[0].name}')
token=$(kubectl get secret $secret -n kube-system -o=jsonpath='{.data.token}' | base64 --decode -)
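Note that on Kubernetes v1.24+ a token Secret is no longer created automatically for a ServiceAccount, so the `jsonpath` lookup above returns nothing. On such clusters a short-lived token can be requested instead:

```shell
# TokenRequest API (kubectl 1.24+): issues a bound, time-limited token
token=$(kubectl create token test -n kube-system)
kubectl --token "$token" get pod --all-namespaces
```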

A ClusterRole & ClusterRoleBinding are correct when you need access across all namespaces; just narrow the permissions:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: all-ns-pod-get
  namespace: your-ns

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: all-ns-pod-get
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: all-ns-pod-get
subjects:
- kind: ServiceAccount
  name: all-ns-pod-get
  namespace: your-ns

All pods in the namespace your-ns will then automatically mount a Kubernetes token. You can use bare kubectl or a Kubernetes SDK inside the pod without passing any secret. Note that you do not need to pass --token; just run the command from a pod in the namespace where you created the ServiceAccount.
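Inside such a pod the mounted credentials live under the standard path `/var/run/secrets/kubernetes.io/serviceaccount/`, so the API can even be called with plain `curl` instead of `kubectl` (a sketch; it assumes `curl` is available in the container image):

```shell
# Standard in-cluster mount point for the ServiceAccount credentials
SA=/var/run/secrets/kubernetes.io/serviceaccount

# List pods across all namespaces directly against the API server
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat "$SA/token")" \
     https://kubernetes.default.svc/api/v1/pods
```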

Here is a good article explaining the concepts: https://medium.com/@ishagirdhar/rbac-in-kubernetes-demystified-72424901fcb3

  1. Create a test service account by following the YAML below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: default

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
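The three documents above can be saved to one file (separated by `---` lines) and applied together; the file name `rbac.yaml` is just an example:

```shell
# Create the ServiceAccount, ClusterRole and ClusterRoleBinding in one shot
kubectl apply -f rbac.yaml

# Confirm the objects exist
kubectl get serviceaccount test -n default
kubectl get clusterrole pod-reader
kubectl get clusterrolebinding test
```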

Deploy a test pod from the example below:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  serviceAccountName: test
  containers:
  - args:
    - sleep
    - "10000"
    image: alpine
    imagePullPolicy: IfNotPresent
    name: test
    resources:
      requests:
        memory: 100Mi
  2. Install curl and kubectl:
kubectl exec test -- apk add curl
kubectl exec test -- curl -o /bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
kubectl exec test -- sh -c 'chmod +x /bin/kubectl'
  3. You should be able to list pods in all namespaces from the test pod:
master $ kubectl exec test -- sh -c 'kubectl get pods --all-namespaces'
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
app1          nginx-6f858d4d45-m2w6f           1/1     Running   0          19m
app1          nginx-6f858d4d45-rdvht           1/1     Running   0          19m
app1          nginx-6f858d4d45-sqs58           1/1     Running   0          19m
app1          test                             1/1     Running   0          18m
app2          nginx-6f858d4d45-6rrfl           1/1     Running   0          19m
app2          nginx-6f858d4d45-djz4b           1/1     Running   0          19m
app2          nginx-6f858d4d45-mvscr           1/1     Running   0          19m
app3          nginx-6f858d4d45-88rdt           1/1     Running   0          19m
app3          nginx-6f858d4d45-lfjx2           1/1     Running   0          19m
app3          nginx-6f858d4d45-szfdd           1/1     Running   0          19m
default       test                             1/1     Running   0          6m
kube-system   coredns-78fcdf6894-g7l6n         1/1     Running   0          33m
kube-system   coredns-78fcdf6894-r87mx         1/1     Running   0          33m
kube-system   etcd-master                      1/1     Running   0          32m
kube-system   kube-apiserver-master            1/1     Running   0          32m
kube-system   kube-controller-manager-master   1/1     Running   0          32m
kube-system   kube-proxy-vnxb7                 1/1     Running   0          33m
kube-system   kube-proxy-vwt6z                 1/1     Running   0          33m
kube-system   kube-scheduler-master            1/1     Running   0          32m
kube-system   weave-net-d5dk8                  2/2     Running   1          33m
kube-system   weave-net-qjt76                  2/2     Running   1          33m