Serverless deploys second function for S3 trigger
I have created an AWS Lambda function that runs whenever a new file is created under a specific path in an S3 bucket, and it works fine. This is the serverless.yml for the service:
service: redshift
frameworkVersion: '2'

custom:
  bucket: extapp
  path_prefix: 'xyz'
  database: ABC
  schema: xyz_dbo
  table_prefix: shipmentlog
  user: admin
  password: "#$%^&*(*&^%$%"
  port: 5439
  endpoint: "*********.redshift.amazonaws.com"
  role: "arn:aws:iam::*****:role/RedshiftFileTransfer"

provider:
  name: aws
  runtime: python3.8
  stage: prod
  region: us-west-2
  stackName: redshift-prod-copy
  stackTags:
    Service: "it"
  lambdaHashingVersion: 20201221
  memorySize: 128
  timeout: 900
  logRetentionInDays: 14
  environment:
    S3_BUCKET: ${self:custom.bucket}
    S3_BUCKET_PATH_PREFIX: ${self:custom.path_prefix}
    REDSHIFT_DATABASE: ${self:custom.database}
    REDSHIFT_SCHEMA: ${self:custom.schema}
    REDSHIFT_TABEL_PREFIX: ${self:custom.table_prefix}
    REDSHIFT_USER: ${self:custom.user}
    REDSHIFT_PASSWORD: ${self:custom.password}
    REDSHIFT_PORT: ${self:custom.port}
    REDSHIFT_ENDPOINT: ${self:custom.endpoint}
    REDSHIFT_ROLE: ${self:custom.role}
  iam:
    role:
      name: s3-to-redshift-copy
      statements:
        - Effect: Allow
          Action:
            - s3:GetObject
          Resource: "arn:aws:s3:::${self:custom.bucket}/*"

functions:
  copy:
    handler: handler.run
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.path_prefix}/
            - suffix: .json
          existing: true

package:
  exclude:
    - node_modules/**
    - package*.json
    - README.md

plugins:
  - serverless-python-requirements
But when I deploy this service, a second function named redshift-prod-custom-resource-existing-s3 is deployed as well, and it is a Node.js function. I want to understand why this second function is necessary for the main Lambda function to be triggered when a new file is created under that path in the S3 bucket.
Yes, this is how the Serverless Framework adds a trigger that invokes a Lambda function to an S3 bucket that already exists: through a CloudFormation Custom Resource. Because the event is declared with existing: true, the bucket is not created by the stack, and CloudFormation can only configure S3 event notifications natively as a property of an AWS::S3::Bucket resource that it manages itself. To work around that, the Framework adds a small Node.js helper function (redshift-prod-custom-resource-existing-s3) plus a Custom::S3 resource backed by it; during stack create, update, and delete, CloudFormation invokes the helper, which calls the S3 API to attach (or remove) the notification configuration on the existing bucket.
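For illustration, here is a rough sketch of the extra pieces that end up in the compiled CloudFormation template. The logical IDs and property names below are assumptions based on Framework v2 behavior; the authoritative version is in .serverless/cloudformation-template-update-stack.json after sls package:

# Sketch only — names are assumptions; inspect the compiled template
# to see the real resources.
CopyCustomS31:
  Type: Custom::S3
  Version: 1.0
  DependsOn:
    - CustomDashresourceDashexistingDashs3LambdaFunction
  Properties:
    # ServiceToken points at the Node.js helper, which calls the S3
    # PutBucketNotificationConfiguration API when the stack is created,
    # updated, or deleted.
    ServiceToken:
      Fn::GetAtt:
        - CustomDashresourceDashexistingDashs3LambdaFunction
        - Arn
    FunctionName: redshift-prod-copy
    BucketName: extapp
    BucketConfigs:
      - Event: s3:ObjectCreated:*
        Rules:
          - Prefix: xyz/
          - Suffix: .json

The helper has to remain part of the stack, rather than run once at deploy time, because CloudFormation also invokes it on sls remove to detach the notification from the bucket.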
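If the extra function bothers you and the bucket could instead be created and owned by this stack, one alternative (only valid for a bucket that does not exist yet, an assumption on my part) is to drop existing: true and let the Framework create the bucket. The notification is then declared natively on the AWS::S3::Bucket resource and no helper function is deployed:

functions:
  copy:
    handler: handler.run
    events:
      - s3:
          bucket: ${self:custom.bucket}  # bucket is now created by the stack
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.path_prefix}/
            - suffix: .json
          # no `existing: true`, so no Custom::S3 resource and no Node.js helper

For a pre-existing bucket, though, the Custom Resource shown above is the Framework's supported mechanism, and the second function is expected.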