Is it possible to upload a file to S3 without reading or duplicating it?
I'm trying to upload a file to S3, but the file is large and we need to upload it frequently. So I'm looking for a way to upload a file to S3 with Node.js without reading the entire file contents into memory.
The code below works, but it reads the whole file every time I upload.
const fs = require("fs");
const aws = require("aws-sdk");

aws.config.update({
  secretAccessKey: process.env.ACCESS_SECRET,
  accessKeyId: process.env.ACCESS_KEY,
  region: process.env.REGION,
});

const BUCKET = process.env.BUCKET;
const s3 = new aws.S3();
const fileName = "logs.txt";

const uploadFile = () => {
  fs.readFile(fileName, (err, data) => {
    if (err) throw err;
    const params = {
      Bucket: BUCKET, // pass your bucket name
      Key: fileName, // the object will be saved under this key
      Body: JSON.stringify(data, null, 2),
    };
    s3.upload(params, function (s3Err, data) {
      if (s3Err) throw s3Err;
      console.log(`File uploaded successfully at ${data.Location}`);
    });
  });
};

uploadFile();
You can use a stream.
First create a read stream for the file you want to upload; then pass it to S3 as the Body.
import { createReadStream } from 'fs';
const inputStream = createReadStream('sample.txt');
s3.upload({ Key: fileName, Body: inputStream, Bucket: BUCKET })
  .promise()
  .then(console.log, console.error);
You can use multipart upload:
AWS blog post: https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
The same question on SO for Python: Can I stream a file upload to S3 without a content-length header?
JS API reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3/ManagedUpload.html
A basic example:
const upload = new AWS.S3.ManagedUpload({
  params: { Bucket: "bucket", Key: "key", Body: stream },
});
So you have to provide a stream as input:
const readableStream = fs.createReadStream(filePath);
The Node.js fs API is documented here: https://nodejs.org/api/fs.html#fscreatereadstreampath-options
Of course, you can also process the data as you read it and then pass the result on to the S3 API; you just need to implement the Stream API.