AccessDenied error message when calling aws s3 buckets from serverless lambda function with boto3
I'm building a serverless application on Amazon AWS. Right now I'm testing boto3 to fetch the list of buckets from my AWS S3 service. Even though my IAM user has AdministratorAccess, every time I invoke my Lambda function it returns an error message. Can someone help me? Thanks for your attention. Here is my error message:
{
  "stackTrace": [
    [
      "/var/task/handler.py",
      9,
      "hello",
      "for bucket in s3.buckets.all():"
    ],
    [
      "/var/runtime/boto3/resources/collection.py",
      83,
      "__iter__",
      "for page in self.pages():"
    ],
    [
      "/var/runtime/boto3/resources/collection.py",
      161,
      "pages",
      "pages = [getattr(client, self._py_operation_name)(**params)]"
    ],
    [
      "/var/runtime/botocore/client.py",
      312,
      "_api_call",
      "return self._make_api_call(operation_name, kwargs)"
    ],
    [
      "/var/runtime/botocore/client.py",
      605,
      "_make_api_call",
      "raise error_class(parsed_response, operation_name)"
    ]
  ],
  "errorType": "ClientError",
  "errorMessage": "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied"
}
Here is my Lambda function, handler.py:
import json
import boto3


def hello(event, context):
    s3 = boto3.resource('s3')
    for bucket in s3.buckets.all():
        print(bucket.name)

    body = {
        "message": "gg"
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response
And here is my serverless.yml file:
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: serverless-boto3

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: python2.7

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ] }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
#package:
#  include:
#    - include-me.py
#    - include-me-dir/**
#  exclude:
#    - exclude-me.py
#    - exclude-me-dir/**

functions:
  hello:
    handler: handler.hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
    events:
      - http:
          path: users/create
          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - alexaSmartHome: amzn1.ask.skill.xx-xx-xx-xx
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#    NewOutput:
#      Description: "Description for the output"
#      Value: "Some output value"
In your serverless.yml you have not granted the Lambda function any permission to access S3. The examples in your template are commented out.
Lambda functions get their permissions to access AWS resources from an IAM role. In the Amazon management console, select your Lambda function, scroll down and look for the Execution role. That will show you the role that was created for your function.
Manage Permissions: Using an IAM Role (Execution Role)
Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. The permissions you grant to this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions that you grant to the IAM role:
If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket or writing logs to CloudWatch Logs, you need to grant permissions for the relevant Amazon S3 and CloudWatch actions to the role.
If the event source is stream-based (Amazon Kinesis Streams and DynamoDB streams), AWS Lambda polls these streams on your behalf. AWS Lambda needs permissions to poll the stream and read new records from it, so you need to grant the relevant permissions to this role.
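For the specific error in the question, the ListBuckets API call is covered by the s3:ListAllMyBuckets action, which can only be granted on the "*" resource. A minimal sketch of what an uncommented iamRoleStatements block tailored to that call could look like (this statement is an assumption for illustration, not taken from the question's template):

provider:
  name: aws
  runtime: python2.7
  # Grant the function's execution role the ability to list all buckets.
  # s3:ListAllMyBuckets backs the ListBuckets call and must use Resource: "*".
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:ListAllMyBuckets"
      Resource: "*"

After redeploying with `serverless deploy`, the generated execution role should include this statement and the AccessDenied error on ListBuckets should go away.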
I already had the permissions in place, but adding the following Resources section fixed the problem for me:
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.bucketName}
  S3BucketPermissions:
    Type: AWS::S3::BucketPolicy
    DependsOn: S3Bucket
    Properties:
      Bucket: ${self:custom.bucketName}
      PolicyDocument:
        Statement:
          - Principal: "*"
            Action:
              - s3:PutObject
              - s3:PutObjectAcl
            Effect: Allow
            Sid: "AddPerm"
            Resource: arn:aws:s3:::${self:custom.bucketName}/*
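Note that ${self:custom.bucketName} assumes a custom section defined elsewhere in the same serverless.yml, and in a Serverless Framework config this Resources block sits under a top-level resources: key. A minimal sketch of those surrounding pieces (the bucket name is a placeholder, not from the answer):

custom:
  bucketName: my-example-bucket   # hypothetical placeholder name

resources:
  Resources:
    # ... the S3Bucket and S3BucketPermissions resources shown above ...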