AWS SSO authorization for EKS fails to call sts:AssumeRole

I'm migrating to AWS SSO for CLI access, and so far it works for everything except kubectl. While troubleshooting I followed several guides, which means I've ended up with some cargo-cult behavior, and something is clearly missing from my mental model.

aws sts get-caller-identity
{
    "UserId": "<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/<my username>"
}

kubectl get pods

An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts:::assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/ is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam:::role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87

Interestingly, it appears to be trying to assume the very role it is already using, but I'm not sure how to fix that.
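
To rule kubectl out, the same failing call can be reproduced with a plain STS call. This is a sketch using the role ARN from the error message; <redacted> stands for the account ID:

# Attempt the same AssumeRole the exec plugin makes; expected to fail with
# the identical AccessDenied, pointing at the CLI setup rather than kubectl.
aws sts assume-role \
    --role-arn arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87 \
    --role-session-name sso-debug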

~/.aws/config (subset; I have other profiles, but they aren't relevant here)

[default]
region = us-east-2
output = json

[profile default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json
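
As a quick check of what actually resolves for the profile (a generic CLI feature, nothing SSO-specific):

# Show which region and credential source the default profile resolves to
aws configure list --profile default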

~/.kube/config (clusters section removed)

apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --role
      - arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
      command: aws
      env: null
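
Since the exec plugin just shells out to the AWS CLI, the command can be replayed verbatim from the kubeconfig above; it should surface the same AccessDenied outside kubectl (on success it would print an ExecCredential JSON):

# Same invocation the exec plugin makes, copied from the user entry above
aws --region us-east-2 eks get-token --cluster-name foo --role arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87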

aws-auth mapRoles snippet

- rolearn: arn:aws:iam::<redacted>:role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
  username: "devread:{{SessionName}}"
  groups:
    - view
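
For reference, the live ConfigMap can be inspected with kubectl, assuming some identity that still has working cluster access:

# Verify the mapRoles entry actually present on the cluster
kubectl -n kube-system get configmap aws-auth -o yaml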

What obvious thing am I missing? I've looked at other Stack Overflow posts with similar problems, but none of them had the arn:aws:sts:::assumed-role -> arn:aws:iam:::role path.

The .aws/config had a subtle bug: [profile default] is not meaningful, so the two blocks should be merged into [default]. Only profiles other than the default should have "profile" in their section names.

[default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json

[profile rw]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadWrite
region = us-east-2
sso_region = us-east-2
output = json
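
With the blocks merged, a quick sanity check is to log in once and confirm each profile resolves to the expected role (profile names as defined above):

# Start an SSO session (opens a browser for the device authorization flow)
aws sso login
# Confirm the resolved identity per profile
aws sts get-caller-identity
aws sts get-caller-identity --profile rw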

I also changed .kube/config to get the token based on the profile instead of naming the role explicitly. This fixed the AssumeRole failure, since the existing role is used as-is.

apiVersion: v1
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
    user: ro
  name: ro
current-context: ro
kind: Config
preferences: {}
users:
- name: ro
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - foo
      - --profile
      - default
      command: aws
      env: null
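
As an aside, a recent AWS CLI can generate an equivalent entry instead of hand-editing; this is a sketch of the equivalent command rather than what I actually ran (--alias names the context):

# Writes a context/user pair for the cluster into ~/.kube/config
aws eks update-kubeconfig --region us-east-2 --name foo --profile default --alias ro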

I can now run kubectl config use-context ro, or switch to the other contexts I've defined (omitted for brevity).

On a related note, I had some trouble using an older Terraform, since its s3 backend doesn't handle SSO. aws-vault solved that for me, as sketched below.
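
The pattern is just wrapping terraform so it receives ordinary environment credentials resolved from the SSO profile; a minimal sketch, assuming the default profile:

# aws-vault resolves the profile and injects plain AWS_* env vars,
# which the s3 backend understands
aws-vault exec default -- terraform plan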