Laravel backup error uploading large backup to s3
I have a Laravel project that uses spatie/laravel-backup to create a new backup every day and upload it to S3. It was set up properly and had been running without problems for over a year.
Suddenly, the backup started failing to complete the upload, with the following error:
Copying zip failed because: An exception occurred while uploading parts to a multipart upload. The following parts had errors:
- Part 17: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=17&uploadId=uploadId"; AWS HTTP error: cURL error 55: SSL_write() returned SYSCALL, errno = 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) (server): 100 Continue -
- Part 16: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId"; AWS HTTP error: Client error: `PUT https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code><Message>Your socket connection to the server w (truncated...)
RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. - <?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code>
<Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>
<RequestId>RequestId..</RequestId>
<HostId>Host id..</HostId>
I have tried running:
php artisan backup:run --only-db // 110MB zip file
php artisan backup:run --only-files // 34MB zip file
Both of them work fine. My guess is that the error is caused by the size of the full zip (roughly 145 MB), which would also explain why it never happened before, when the backups were smaller. The laravel-backup package has a related issue, but I don't think the problem is in the library: it just uses the underlying S3 Flysystem interface to upload the zip.
Should I set some parameter in php.ini (for example, to increase the file size curl can upload), or is there a mechanism to split the file into multiple chunks?
You can try adding a timeout parameter to the S3Client (https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_configuration.html), like this:
$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'us-west-2',
    'credentials' => $credentials,
    'http' => [
        'timeout' => 360 // request timeout in seconds
    ]
]);
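The options under 'http' are handed to the underlying Guzzle HTTP handler, so you can also add 'connect_timeout' next to 'timeout' if establishing the connection itself is slow; both values are in seconds.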
In Laravel, however, you should set it in config/filesystems.php instead:
'disks' => [
    's3' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => 'us-east-1',
        'bucket' => env('FILESYSTEM_S3_BUCKET'),
        'http' => [
            'timeout' => 360
        ]
    ]
]
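If raising the timeout alone does not help, the "split the file into multiple chunks" part of the question can also be addressed directly: the AWS SDK for PHP already performs multipart uploads for large files, and its MultipartUploader lets you pick the part size yourself. Smaller parts mean each PUT finishes sooner, which makes the RequestTimeout less likely on a slow connection. Below is a minimal sketch that bypasses Flysystem and calls the SDK directly; the bucket name, key, and local path are placeholders, not values from the original post:

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'eu-west-1',
    'http' => ['timeout' => 360],
]);

// Placeholder bucket, key, and local path: adjust to your setup.
$uploader = new MultipartUploader($s3, '/path/to/backup.zip', [
    'bucket' => 'my.bucket',
    'key' => 'Backups/backup.zip',
    'part_size' => 5 * 1024 * 1024, // 5 MB, the S3 minimum part size
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    // Each failed part is listed in the exception message.
    echo 'Upload failed: ' . $e->getMessage() . "\n";
}

Note that spatie/laravel-backup itself uploads through Flysystem, so this is a way to test whether smaller parts fix the transfer rather than a drop-in change to the package.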