s3cmd: obfuscate file names (change them to random values) on the Amazon S3 side, keeping the original file names locally
My .s3cfg, with a GPG encryption passphrase and other security settings in place. Would you recommend any additional security hardening?
[default]
access_key = $USERNAME
access_token =
add_encoding_exts =
add_headers =
bucket_location = eu-central-1
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/local/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = $PASSPHRASE
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
secret_key = $PASSWORD
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
I upload/sync my local folder to Amazon S3 with this command:
s3cmd -e -v put --recursive --dry-run /Users/$USERNAME/Downloads/ s3://dgtrtrtgth777
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 15957 files, this may take some time...
INFO: [1000/15957]
INFO: [2000/15957]
INFO: [3000/15957]
INFO: [4000/15957]
INFO: [5000/15957]
INFO: [6000/15957]
INFO: [7000/15957]
INFO: [8000/15957]
INFO: [9000/15957]
INFO: [10000/15957]
INFO: [11000/15957]
INFO: [12000/15957]
INFO: [13000/15957]
INFO: [14000/15957]
INFO: [15000/15957]
I tested the encryption with Transmit (a GUI S3 client) and could not retrieve the files as plain text, so the encryption itself works.
However, I can still see the original file names. I would like them changed to random values on the S3 side while keeping the original names locally (a mapping?). How can I do that?
What are the drawbacks of this if I ever need to restore the files? Apart from my Time Machine backup, I use Amazon S3 purely as a backup.
If you use "random" names, syncing will not work.
If the only record of the filename mapping is stored locally, a local failure will leave you unable to restore your backup.
If you do not need every version of your files, I would suggest packing everything into a (possibly encrypted) archive before uploading; a sketch of that follows.
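A minimal sketch of the archive approach, reusing the bucket from your put command (note: with GnuPG 2.1+ you may also need --pinentry-mode loopback for --passphrase to work in batch mode):
# Pack the folder, encrypt the stream symmetrically with GPG, then upload
out=backup-$(date +%Y%m%d).tgz.gpg
tar czf - /Users/$USERNAME/Downloads | gpg -c --batch --passphrase "$PASSPHRASE" -o "$out"
s3cmd put "$out" s3://dgtrtrtgth777/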
Otherwise, you will have to write a small script that lists all the files and runs a separate s3cmd put for each one, specifying a random destination and appending the mapping to a log file; that log file should be the first thing you s3cmd put to your server. I do not recommend this for something as important as storing your backups.
A skeleton showing how this could work:
# Save all files in backupX.sh where X is the version number
find /Users/$USERNAME/Downloads/ | awk 'BEGIN { srand() } { print "s3cmd -e -v put "$0" s3://dgtrshitcrapola/"rand()*1000000 }' > backupX.sh
# Upload the mapping file
s3cmd -e -v put backupX.sh s3://dgtrshitcrapola/
# Upload the actual files
sh backupX.sh
# Add cleanup code here
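Restoring would mean fetching the mapping first and replaying it in reverse, roughly like this (a sketch with the same limitation as the skeleton above: it breaks on paths containing spaces; s3cmd get should decrypt automatically using the gpg_passphrase from .s3cfg):
# Fetch the mapping, then download each object back to its original path
s3cmd get s3://dgtrshitcrapola/backupX.sh
while read -r _ _ _ _ src dst; do
    s3cmd get "$dst" "$src"
done < backupX.sh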
However, you will then need to handle filename collisions, failed uploads, version clashes, and so on... so why not use an existing tool that backs up to S3?
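If you do stay with a script, a variant sketch of the same idea sidesteps the collision handling by deriving the remote name deterministically from a hash of the local path, so every run maps the same file to the same object:
# Variant: use a hash of the path as the remote name (md5 -s is macOS;
# pipe through md5sum on Linux) and keep the mapping in a plain log file
find /Users/$USERNAME/Downloads/ -type f | while read -r f; do
    h=$(md5 -q -s "$f")
    echo "$h $f" >> mapping.txt
    s3cmd -e put "$f" "s3://dgtrshitcrapola/$h"
done
s3cmd -e put mapping.txt s3://dgtrshitcrapola/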