S3 public read access restricted by IP range for object uploaded by third-party

I am trying to accomplish the following scenario:

1) Account A uploads a file to an S3 bucket owned by account B. At upload time I grant full control to the bucket owner, account B:

s3_client.upload_file(
    local_file, 
    bucket, 
    remote_file_name, 
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)

2) Account B defines a bucket policy that restricts access to the objects by IP range (see below):

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowIPs",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketB/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "<CIDR1>",
                        "<CIDR2>"
                    ]
                }
            }
        }
    ]
}

If I try to download the file as an anonymous user, even from within the allowed IP range, access is denied. If instead I add a public-read grant for everyone at upload time, the file can be downloaded from any IP:

s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=<AccountB_CanonicalID>',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)

Question: is it possible to upload a file from account A to account B while still restricting public access by IP range?

This is not possible. According to the documentation:

Bucket Policy – For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.

However, there is a workaround for this case. The problem is that the owner of the uploaded file is account A; we need to upload the file in such a way that the owner is account B. To achieve this:

  1. In account B, create a role with a trusted entity of type "Another AWS account", specifying account A. Attach a policy that grants upload permissions on the bucket.
  2. In account A, create a policy that allows the AssumeRole action, specifying the ARN of the role created in step 1 as the resource.
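As a sketch, the trust policy on the role in account B (step 1) could look like this; the account ID is a placeholder:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::<AccountA_ID>:root"},
            "Action": "sts:AssumeRole"
        }
    ]
}
```

and the policy attached to the uploading identity in account A (step 2), again with placeholder account ID and role name:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountB_ID>:role/<RoleName>"
        }
    ]
}
```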

To upload files from boto3 you can use the following code. Note the use of cachetools to handle the limited TTL of the temporary credentials.

import logging
import sys

import boto3

from cachetools import cached, TTLCache

CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()


def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='s3-cross-account-upload',  # required by AssumeRole
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()
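The client is cached for CREDENTIALS_TTL - 60 seconds, i.e. slightly less than the credentials' lifetime, so a fresh AssumeRole call happens before the old credentials expire. The caching behaviour itself can be sketched without touching AWS; the factory below is just a stand-in for _get_s3_client_for_another_account:

```python
from cachetools import cached, TTLCache

calls = []


@cached(TTLCache(maxsize=1, ttl=60))
def make_client():
    # Stand-in for the STS AssumeRole call and client construction above;
    # records how many times the real work would have run.
    calls.append(1)
    return object()


first = make_client()
second = make_client()
# Within the TTL the cached client is reused: the factory ran only once.
assert first is second
assert len(calls) == 1
```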