ERROR AccessDenied: Access Denied at Request.extractError (/var/task/node_modules/aws-sdk/lib/services/s3.js

I'm using the Node.js S3 package "aws-sdk". When I run locally on my Mac with serverless-offline, everything works fine: both the s3.getSignedUrl and s3.listObjects functions work as expected.

But when I run the deployed application, s3.getSignedUrl still works fine while s3.listObjects does not. I get this error in CloudWatch:

In CloudWatch > Log groups > /aws/lambda/mamahealth-api-stage-userFilesIndex:

2021-12-24T02:49:50.965Z    421e054e-d1bf-429a-b73c-402ad21c7bae    ERROR   AccessDenied: Access Denied
    at Request.extractError (/var/task/node_modules/aws-sdk/lib/services/s3.js:714:35)
    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:688:14)
    at Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/task/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:690:12)
    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'AccessDenied',
  region: 'ap-northeast-1',
  time: 2021-12-24T02:49:50.960Z,
  requestId: 'Q8B79GKAHPHMH3DN',
  extendedRequestId: 'Nhx4ekCzotCSjGXGssFl0lQtyrWf01Gf8416FaqBALA07g3qm31avCIErDPcJWaJt+90xNz8w0o=',
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 43.44595425080651
}

It looks like there is a permission problem with my AWS S3 access.

My aws-sdk version is 2.995.0.

My helpers/s3.ts code:

import stream from 'stream';
import { nanoid } from 'nanoid';
import axios from 'axios';
import AWS from 'aws-sdk';
import mime from 'mime-types';
import moment from 'moment-timezone';

AWS.config.update({
  region: 'ap-northeast-1',
});
const s3 = new AWS.S3();

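// Returns a PassThrough stream plus the S3 upload promise: pipe data into
// writeStream, then await promise to get the upload result.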
export const uploadFromStream = (key: string, fileExt: string) => {
  const pass = new stream.PassThrough();
  return {
    writeStream: pass,
    promise: s3
      .upload({
        Bucket: process.env.AWS_BUCKET_NAME!,
        Key: key,
        Body: pass,
        ContentType: mime.lookup(fileExt) || undefined,
      })
      .promise(),
  };
};

type S3FileData = {
  lastModified: number;
  id: string;
  fileExt: string;
  size: number;
};

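// Lists the immediate children of `${s3Folder}/`. Delimiter '/' keeps keys in
// nested sub-folders out of Contents. Note: listObjects returns at most 1,000
// keys per request; larger folders would need pagination via Marker.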
export const listObjects = async (s3Folder: string): Promise<S3FileData[]> => {
  const params = {
    Bucket: process.env.AWS_BUCKET_NAME!,
    Delimiter: '/',
    Prefix: `${s3Folder}/`,
  };
  const data = await s3.listObjects(params).promise();
  if (!data.Contents) return [];
  const fileList: S3FileData[] = [];
  for (let index = 0; index < data.Contents.length; index += 1) {
    const content = data.Contents[index];
    const { Size: size } = content;
    const splitedKey: string[] | undefined = content.Key?.split('/');
    const lastModified = moment(content.LastModified).unix();
    const fileFullName =
      (splitedKey && splitedKey[splitedKey.length - 1]) || '';
    const fileFullNameSplited = fileFullName.split('.');
    if (fileFullNameSplited.length < 2 || !size)
      throw Error('no file ext or no size');
    const fileExt = fileFullNameSplited.pop() as string;
    // join('.') (not join(), which inserts commas) restores ids that contain dots
    const id = fileFullNameSplited.join('.');
    fileList.push({ id, fileExt, lastModified, size });
  }
  return fileList;
};

export const uploadFileFromBuffer = async (
  key: string,
  fileExt: string,
  buffer: Buffer,
) => {
  return s3
    .upload({
      Bucket: process.env.AWS_BUCKET_NAME!,
      Key: key,
      Body: buffer,
      ContentType: mime.lookup(fileExt) || undefined,
    })
    .promise();
};

export const uploadFileFromNetwork = async (
  key: string,
  fileExt: string,
  readUrl: string,
) => {
  const { writeStream, promise } = uploadFromStream(key, fileExt);
  const response = await axios({
    method: 'get',
    url: readUrl,
    responseType: 'stream',
  });
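  // Note: pipe() does not forward errors from response.data to writeStream, so
  // a download that fails mid-stream could leave the upload promise unsettled.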
  response.data.pipe(writeStream);
  return promise;
};

export enum S3ResourceType {
  image = 'image',
  report = 'report',
}

export const getSystemGeneratedFileS3Key = (
  resourceType: S3ResourceType,
  fileExt: string,
  id?: string,
): string => {
  return `system-generated/${resourceType}/${id || nanoid()}.${fileExt}`;
};

export const getUserUploadedFileS3Key = (
  userId: string,
  fileExt: string,
  id?: string,
) => {
  return `user-uploaded/${userId}/${id || nanoid()}.${fileExt}`;
};

export const downloadFile = async (key: string) => {
  const params: AWS.S3.GetObjectRequest = {
    Bucket: process.env.AWS_BUCKET_NAME!,
    Key: key,
  };
  const { Body } = await s3.getObject(params).promise();
  return Body;
};

export const deleteFile = (key: string) => {
  const params: AWS.S3.DeleteObjectRequest = {
    Bucket: process.env.AWS_BUCKET_NAME!,
    Key: key,
  };
  return s3.deleteObject(params).promise();
};

export enum GetSignedUrlOperation {
  getObject = 'getObject',
  putObject = 'putObject',
}

// Change this value to adjust the signed URL's expiration
const URL_EXPIRATION_SECONDS = 300;

export type GetSignedUrlOptions = {
  contentType: string;
};

/**
 * getSignedUrl
 * @param key S3 key
 * @param putOptions If provided, returns an upload (putObject) URL; otherwise a download (getObject) URL.
 * @param expirationSeconds Expiration in seconds; defaults to 300.
 * @returns Signed URL
 */
export const getSignedUrl = (
  key: string,
  putOptions?: GetSignedUrlOptions,
  expirationSeconds?: number,
) => {
  const contentType = putOptions?.contentType;
  const operation = putOptions
    ? GetSignedUrlOperation.putObject
    : GetSignedUrlOperation.getObject;
  return s3.getSignedUrl(operation, {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: key,
    Expires: expirationSeconds || URL_EXPIRATION_SECONDS,
    ContentType: contentType,
  });
};
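
// Usage sketch (hypothetical key): without putOptions a download (getObject)
// URL is returned; with putOptions an upload (putObject) URL is returned.
// const downloadUrl = getSignedUrl('user-uploaded/123/abc.png');
// const uploadUrl = getSignedUrl('user-uploaded/123/abc.png', { contentType: 'image/png' });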

Here is my S3 bucket configuration:

    S3MasterResourceBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccelerateConfiguration:
          AccelerationStatus: Suspended
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
        PublicAccessBlockConfiguration:
          BlockPublicAcls: TRUE
          BlockPublicPolicy: TRUE
          IgnorePublicAcls: TRUE
          RestrictPublicBuckets: TRUE
        VersioningConfiguration:
          Status: Enabled

My IAM settings in serverless.yaml:

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
        - xray:PutTraceSegments
        - xray:PutTelemetryRecords
        - cognito-idp:AdminAddUserToGroup
        - cognito-idp:AdminUpdateUserAttributes
        - cognito-idp:AdminInitiateAuth
        - cognito-idp:AdminGetUser
        - s3:PutObject
        - s3:GetObject
        - s3:DeleteObject
        - s3:ListBucket
        - sqs:SendMessage
      Resource:
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-ImagePostProcessQueueArn
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn2
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
        - "Fn::Join":
            - "/"
            - - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
              - "index"
              - "*"
        - "Fn::Join":
            - "/"
            - - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
              - "*"

I saw Ermiya Eskandary's comment on this question, and then I checked the items below:

  1. The file exists

Yes. This code returns data when I use serverless-offline, but it throws the 403 error when I run the deployed application:

const data = await s3.listObjects(params).promise();

  2. Using the correct key and bucket name, in the correct region.

Yes, the key, bucket name, and region are all correct.

  3. Using the correct access key and secret access key for a user with permissions?

On my Mac I used this command:

aws configure

and then correctly entered my team account's access key ID and secret access key.

  4. The role assigned to the user.

The role is "AdministratorAccess".

In the last line of your IAM role, you grant the lambda function permission to perform s3:PutObject, s3:GetObject, s3:DeleteObject and s3:ListBucket on `S3MasterResourceBucketArn/*`.

I think the first three actions and the last one have different resource requirements. For the first three (PutObject, GetObject and DeleteObject), the resource name is correct. For the last one (ListBucket), I believe it must be the ARN of the bucket itself, without the star at the end (`S3MasterResourceBucketArn`).
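
As a minimal sketch of the difference, using a hypothetical bucket name my-bucket (your real ARN comes from the S3MasterResourceBucketArn import), the two kinds of actions should end up scoped like this:

- Effect: Allow
  Action:
    - s3:ListBucket
  Resource: arn:aws:s3:::my-bucket # the bucket itself, no trailing /*
- Effect: Allow
  Action:
    - s3:PutObject
    - s3:GetObject
    - s3:DeleteObject
  Resource: arn:aws:s3:::my-bucket/* # the objects inside the bucket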

As a good practice, you should split your policy into multiple statements, for example:

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
        - "Fn::Join":
            - "/"
            - - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
              - "index"
              - "*"
    - Effect: Allow
      Action:
        - cognito-idp:AdminAddUserToGroup
        - cognito-idp:AdminUpdateUserAttributes
        - cognito-idp:AdminInitiateAuth
        - cognito-idp:AdminGetUser
      Resource:
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn2
    - Effect: Allow
      Action:
        - sqs:SendMessage
      Resource:
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-ImagePostProcessQueueArn
    - Effect: Allow
      Action:
        - s3:ListBucket
      Resource:
        - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
        - s3:DeleteObject
      Resource:
        - "Fn::Join":
            - "/"
            - - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
              - "*"
    - Effect: Allow
      Action:
        - xray:PutTraceSegments
        - xray:PutTelemetryRecords
      Resource:
        - "*"