Pulumi (TypeScript, AWS): How to upload multiple files to S3 incl. nested files in directories for static website hosting

The Create an AWS S3 Website in Under 5 Minutes YT video and the Host a Static Website on Amazon S3 Pulumi tutorial both explain nicely how to set up website hosting on S3 using Pulumi.

The example code uses Pulumi's Bucket and BucketObject. The former creates the S3 Bucket, the latter creates the objects inside it, primarily an index.html for public access, like this:

const aws = require("@pulumi/aws");
const pulumi = require("@pulumi/pulumi");
const mime = require("mime");

// Create an S3 bucket
let siteBucket = new aws.s3.Bucket("s3-website-bucket");

let siteDir = "www"; // directory for content files

// For each file in the directory, create an S3 object stored in `siteBucket`
for (let item of require("fs").readdirSync(siteDir)) {
    let filePath = require("path").join(siteDir, item);
    let object = new aws.s3.BucketObject(item, {
      bucket: siteBucket,
      source: new pulumi.asset.FileAsset(filePath),     // use FileAsset to point to a file
      contentType: mime.getType(filePath) || undefined, // set the MIME type of the file
    });
}

exports.bucketName = siteBucket.bucket; // create a stack export for bucket name

Now with a Vue.js / Nuxt.js based application, I need to upload multiple generated files, which reside in the dist directory at my project root. They are produced by npm run build and look like this:

$ find dist
dist
dist/favicon.ico
dist/index.html
dist/.nojekyll
dist/200.html
dist/_nuxt
dist/_nuxt/LICENSES
dist/_nuxt/static
dist/_nuxt/static/1619685747
dist/_nuxt/static/1619685747/manifest.js
dist/_nuxt/static/1619685747/payload.js
dist/_nuxt/f3a11f3.js
dist/_nuxt/f179782.js
dist/_nuxt/fonts
dist/_nuxt/fonts/element-icons.4520188.ttf
dist/_nuxt/fonts/element-icons.313f7da.woff
dist/_nuxt/c25b1a7.js
dist/_nuxt/84fe6d0.js
dist/_nuxt/a93ae32.js
dist/_nuxt/7b77d06.js

My problem is that these files also include files nested in subdirectories, which can themselves contain further subdirectories, e.g. dist/_nuxt/fonts/element-icons.4520188.ttf. The approach from the tutorial does not descend into subdirectories, and I had no idea how to do that with Pulumi/TypeScript.

I stuck with that approach at first and tried to build a recursive TypeScript function that creates either a file or a directory as a Pulumi BucketObject, as the tutorial suggests. That got me into a mess! I needed to create directories with BucketObject, which works with a "/" appended to the key parameter. Just for the record, that function looked like this:

function createS3BucketFolder(dirName: string) {
  new aws.s3.BucketObject(dirName, {
    bucket: nuxtBucket,
    acl: "public-read",
    key: dirName + "/", // an appended '/' creates an S3 Bucket prefix
    contentType: "application/x-directory" // this content type is also needed for the S3 Bucket prefix
    // no source needed here!
  })
}

But that was only one part of the large amount of code needed to recursively traverse directories with TypeScript (see also this huge amount of SO answers on the topic, ranging from synchronous versions to crazy Node.js 11+ async solutions). I ended up with around 40-50 lines of code just for recursively adding the generated static site files to S3. It didn't feel good to have no test for that (and I don't really get why Pulumi doesn't support this use case somehow, like Terraform does).
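For the record, the core of such a traversal can be written much more compactly with Node's `fs.readdirSync` and `withFileTypes`. This is a sketch, not the exact code I ended up with; the function name `walkDir` is my own, and it only collects the keys that each nested file would get as a BucketObject:

```typescript
import * as fs from "fs";
import * as path from "path";

// Recursively collect all file paths below `dir`, relative to `root`.
// Each returned path would become the `key` of one aws.s3.BucketObject.
function walkDir(dir: string, root: string = dir): string[] {
  const files: string[] = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...walkDir(full, root)); // descend into subdirectories
    } else {
      // use forward slashes so the keys form S3 prefixes on every platform
      files.push(path.relative(root, full).split(path.sep).join("/"));
    }
  }
  return files;
}
```

Each path from `walkDir("dist")` could then be turned into a `new aws.s3.BucketObject(key, { bucket, key, source: new pulumi.asset.FileAsset(...) })`, which is essentially what my 40-50 lines did.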

Finally I stumbled upon the Pulumi tutorial about Secure Static Website Using Amazon S3, CloudFront, Route53, and Certificate Manager, where there's a special paragraph about deployment speed with an interesting statement:

This example creates a aws.S3.BucketObject for every file served from the website. When deploying large websites, that can lead to very long updates as every individual file is checked for any changes. It may be more efficient to not manage individual files using Pulumi and instead just use the AWS CLI to sync local files with the S3 bucket directly.

TL;DR: For non-hello-world use cases, the Pulumi docs themselves tell us not to upload files to S3 with Pulumi, but to use the AWS CLI instead! So I refactored my code to only create the S3 Bucket with Pulumi, like this:

import * as aws from "@pulumi/aws";

// Create an AWS resource (S3 Bucket)
const nuxtBucket = new aws.s3.Bucket("microservice-ui-nuxt-js-hosting-bucket", {
  acl: "public-read",
  website: {
    indexDocument: "index.html",
  }
});

// Export the name of the bucket
export const bucketName = nuxtBucket.id;

This creates an S3 Bucket with static site hosting enabled and public-read access, all via Pulumi. Now with the AWS CLI we can elegantly copy/sync our files into the bucket using the following command:

aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read

With $(pulumi stack output bucketName) we simply retrieve the name of the S3 Bucket created by Pulumi. Note the --acl public-read parameter at the end: you have to enable public read access on every single static web file in S3, even though the Bucket itself already has public read access!
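As an alternative to repeating --acl public-read on every sync (a sketch I haven't used in my own setup, so verify it against your security requirements): a bucket policy granting s3:GetObject on all keys makes per-object ACLs unnecessary. The helper below only builds the policy JSON; the Pulumi wiring is sketched in comments, with `nuxtBucket` referring to the Bucket created above:

```typescript
// Build a public-read bucket policy document for the given bucket name.
// Granting s3:GetObject on every key makes per-object ACLs unnecessary.
function publicReadPolicyFor(bucketName: string): string {
  return JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: "*",
        Action: ["s3:GetObject"],
        Resource: [`arn:aws:s3:::${bucketName}/*`],
      },
    ],
  });
}

// Wiring it up in the Pulumi program (sketch):
//   new aws.s3.BucketPolicy("nuxt-bucket-policy", {
//     bucket: nuxtBucket.id,
//     policy: nuxtBucket.id.apply(publicReadPolicyFor),
//   });
// After that, a plain `aws s3 sync ../dist/ s3://...` without --acl suffices.
```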