RBAC rules not working in a kubeadm cluster
In one of our customers' Kubernetes clusters (v1.16.8, set up with kubeadm), RBAC does not seem to work at all. We created a ServiceAccount, a read-only ClusterRole, and a ClusterRoleBinding with the YAML below, but when we log in through the dashboard or with kubectl, the user can do almost anything in the cluster. What could be causing this?
kind: ServiceAccount
apiVersion: v1
metadata:
  name: read-only-user
  namespace: permission-manager
secrets:
  - name: read-only-user-token-7cdx2
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-only-user___template-namespaced-resources___read-only___all_namespaces
  labels:
    generated_for_user: ''
subjects:
  - kind: ServiceAccount
    name: read-only-user
    namespace: permission-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: template-namespaced-resources___read-only
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: template-namespaced-resources___read-only
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - pods/log
      - pods/portforward
      - podtemplates
      - replicationcontrollers
      - resourcequotas
      - secrets
      - services
      - events
      - daemonsets
      - deployments
      - replicasets
      - ingresses
      - networkpolicies
      - poddisruptionbudgets
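One way to see what this ServiceAccount is actually granted, independent of whatever identity the dashboard or kubectl session authenticates as, is impersonation (the command below is illustrative, assuming the objects above have been applied):

kubectl auth can-i --list --as=system:serviceaccount:permission-manager:read-only-user

kubectl auth can-i --list prints the full set of verbs and resources the impersonated identity is allowed.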
Here is the cluster's kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.42
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.16.8
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.42
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
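The --authorization-mode=Node,RBAC flag above shows RBAC is enabled. For reference, one way to read the flags back from a running cluster (illustrative, relying on the component=kube-apiserver label in the manifest above) is:

kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep -- '--authorization-mode'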
The rules you defined only control that ServiceAccount. Here is a tested spec; create a YAML file with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test
---
kind: ClusterRoleBinding # <-- REMINDER: Cluster wide and not namespace specific. Use RoleBinding for namespace specific.
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role-binding
subjects:
  - kind: ServiceAccount
    name: test-sa
    namespace: test
  - kind: User
    name: someone
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cluster-role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-cluster-role
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - '*'
    resources:
      - configmaps
      - endpoints
      - persistentvolumeclaims
      - pods
      - pods/log
      - pods/portforward
      - podtemplates
      - replicationcontrollers
      - resourcequotas
      - secrets
      - services
      - events
      - daemonsets
      - deployments
      - replicasets
      - ingresses
      - networkpolicies
      - poddisruptionbudgets
Apply the spec above: kubectl apply -f <filename>.yaml
It works as expected:
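For example, impersonating the subjects shows the read-only behavior (illustrative commands; expected answers shown assume no other bindings grant these identities extra permissions):

kubectl auth can-i list pods --as=system:serviceaccount:test:test-sa
yes
kubectl auth can-i delete pods --as=system:serviceaccount:test:test-sa
no
kubectl auth can-i list pods --as=someone
yes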
Delete the test resources: kubectl delete -f <filename>.yaml