How do I fix a role-based problem when my role appears to have the correct permissions?

I am trying to set up the namespace "sandbox" in Kubernetes and have been using it for several days with no problems. Today I got the error below.
I have checked to make sure that I have all of the required configmaps.

Is there a log or something where I can find what it is referring to?

panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

I did find this thread (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) and applied the following patch to my service account, but I am still getting the same error.

automountServiceAccountToken: false
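For reference, a patch like the one above can be applied with kubectl patch. This is only a sketch; it assumes the "default" service account in the "sandbox" namespace is the one being patched, which is not stated in the post:

```shell
# Hypothetical example: set automountServiceAccountToken on the
# "default" service account in "sandbox" (both names are assumptions).
kubectl patch serviceaccount default -n sandbox \
  -p '{"automountServiceAccountToken": false}'

# Confirm the field is now present on the service account
kubectl get serviceaccount default -n sandbox \
  -o jsonpath='{.automountServiceAccountToken}'
```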

UPDATE: In answer to @p10l, I am using a bare-metal cluster, version 1.23.0. No Terraform.

I am getting closer, but still not there.

This seems to be another RBAC problem, but the error does not make sense to me.

I have a user "dma" and I am running workflows in the "sandbox" namespace using the context dma@kubernetes.

The error is now:

Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"

but that user does appear to have the correct permissions.

This is the output of kubectl get role dma -n sandbox -o yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox
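One way to replay the failing request against RBAC (a standard diagnostic, not something from the original post) is kubectl auth can-i with impersonation. Note that the RoleBinding above names a ServiceAccount subject, while the error message reports User "dma", so both subject forms are worth checking:

```shell
# Ask the API server whether user "dma" may create workflows
# in "sandbox" -- this mirrors the failing request exactly.
kubectl auth can-i create workflows.argoproj.io -n sandbox --as=dma

# The RoleBinding binds a ServiceAccount named "dma", so also check
# the service-account form of the subject:
kubectl auth can-i create workflows.argoproj.io -n sandbox \
  --as=system:serviceaccount:sandbox:dma
```

If the first command prints "no" and the second prints "yes", the role is bound to a ServiceAccount while the request is being made as a User, which would explain the forbidden error despite the Role looking correct.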

The issue you are describing is a recurring one, and it comes down to your cluster missing the KUBECONFIG environment variable.

First, run echo $KUBECONFIG on all your nodes to see whether it is empty. If it is, locate the config file in your cluster, copy it to all the nodes, and then export the variable by running export KUBECONFIG=/path/to/config. The file can usually be found at ~/.kube/config or /etc/kubernetes/admin.conf on the master node.
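The check-and-export steps above can be sketched as follows; the two candidate paths are the usual defaults, so treat them as assumptions for your cluster:

```shell
# Run on each node: report whether KUBECONFIG is currently set
echo "KUBECONFIG=${KUBECONFIG:-<empty>}"

# If it is empty, point it at a kubeconfig file that exists on this node
if [ -z "$KUBECONFIG" ]; then
  for candidate in "$HOME/.kube/config" /etc/kubernetes/admin.conf; do
    if [ -f "$candidate" ]; then
      export KUBECONFIG="$candidate"
      break
    fi
  done
fi

echo "Using KUBECONFIG=$KUBECONFIG"
```

Adding the export line to the node's shell profile makes it persist across sessions; a plain export only lasts for the current shell.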

Let me know if this solution works in your case.