Spring Webflux and Amazon SDK 2.x: S3AsyncClient timeout
I am implementing a reactive project with Spring Boot 2.3.1, WebFlux, Spring Data with the reactive MongoDB driver, and Amazon SDK 2.14.6.
I have a CRUD that persists an entity in MongoDB and must upload a file to S3. I am using the SDK's reactive method s3AsyncClient.putObject, but I am running into a problem: the CompletableFuture throws the following exception:
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution did not complete before the specified timeout configuration: 60000 millis
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) ~[na:na]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.MonoMapFuseable] :
reactor.core.publisher.Mono.map(Mono.java:3054)
br.com.wareline.waredrive.service.S3Service.uploadFile(S3Service.java:94)
The file I am trying to upload is about 34 KB, a simple text file.
The upload method is in my S3Service.java class, which is autowired into DocumentoService.java:
@Component
public class S3Service {

    @Autowired
    private final ConfiguracaoService configuracaoService;

    public Mono<PutObjectResponse> uploadFile(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final String cliente) {
        return configuracaoService.findByClienteId(cliente)
                .switchIfEmpty(Mono.error(new ResponseStatusException(HttpStatus.NOT_FOUND, String.format("Configuração com id %s não encontrada", cliente))))
                .map(configuracao -> uploadFileToS3(headers, body, fileKey, configuracao))
                .doOnSuccess(response -> {
                    checkResult(response);
                });
    }

    private PutObjectResponse uploadFileToS3(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final Configuracao configuracao) {
        final long length = headers.getContentLength();
        if (length < 0) {
            throw new UploadFailedException(HttpStatus.BAD_REQUEST.value(), Optional.of("required header missing: Content-Length"));
        }
        final Map<String, String> metadata = new HashMap<>();
        final MediaType mediaType = headers.getContentType() != null ? headers.getContentType() : MediaType.APPLICATION_OCTET_STREAM;
        final S3AsyncClient s3AsyncClient = getS3AsyncClient(configuracao);
        return s3AsyncClient.putObject(
                PutObjectRequest.builder()
                        .bucket(configuracao.getBucket())
                        .contentLength(length)
                        .key(fileKey)
                        .contentType(mediaType)
                        .metadata(metadata)
                        .build(),
                AsyncRequestBody.fromPublisher(body))
                .whenComplete((resp, err) -> s3AsyncClient.close())
                .join();
    }

    public S3AsyncClient getS3AsyncClient(final Configuracao s3Props) {
        final SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
                .readTimeout(Duration.ofMinutes(1))
                .writeTimeout(Duration.ofMinutes(1))
                .connectionTimeout(Duration.ofMinutes(1))
                .maxConcurrency(64)
                .build();
        final S3Configuration serviceConfiguration = S3Configuration.builder()
                .checksumValidationEnabled(false)
                .chunkedEncodingEnabled(true)
                .build();
        return S3AsyncClient.builder()
                .httpClient(httpClient)
                .region(Region.of(s3Props.getRegion()))
                .credentialsProvider(() -> AwsBasicCredentials.create(s3Props.getAccessKey(), s3Props.getSecretKey()))
                .serviceConfiguration(serviceConfiguration)
                .overrideConfiguration(builder -> builder.apiCallTimeout(Duration.ofMinutes(1)).apiCallAttemptTimeout(Duration.ofMinutes(1)))
                .build();
    }
}
My implementation is based on the Amazon SDK documentation and the code example at https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/s3/src/main/java/com/example/s3/S3AsyncOps.java
I cannot figure out what is causing the async client timeout. The strange part is that downloading a file from the bucket with the same S3AsyncClient works. I tried increasing the timeouts in the S3AsyncClient to about 5 minutes, without success. I don't know what I am doing wrong.
I found the error.
When I set the contentLength in PutObjectRequest.builder().contentLength(length), I was using headers.getContentLength(), which is the size of the whole request. Other information is sent along with my request, so the Content-Length ends up larger than the actual file length.
I found this in the Amazon documentation:
The number of bytes set in the "Content-Length" header is more than
the actual file size
When you send an HTTP request to Amazon S3, Amazon S3 expects to
receive the amount of data specified in the Content-Length header. If
the expected amount of data isn't received by Amazon S3, and the
connection is idle for 20 seconds or longer, then the connection is
closed. Be sure to verify that the actual file size that you're
sending to Amazon S3 aligns with the file size that is specified in
the Content-Length header.
https://aws.amazon.com/pt/premiumsupport/knowledge-center/s3-socket-connection-timeout-error/
The timeout error happens because S3 waits until the amount of data sent reaches the size declared by the client; the file finishes transferring before the declared Content-Length is reached, the connection then sits idle, and S3 closes the socket.
I changed the content length to the actual file size and the upload succeeded.
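The mismatch can be sketched in plain Java, with no AWS types (the helper name and the buffer list are illustrative, not from the original code): the value passed to PutObjectRequest.builder().contentLength(...) must equal the bytes actually streamed in the body, which you can obtain by summing the buffers rather than trusting the request-level Content-Length header, since that header also counts multipart boundaries and other parts of the request.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ContentLengthSketch {

    // Sum the readable bytes across the body buffers; this is the value
    // that should be declared to S3, not the whole request's Content-Length.
    static long actualBodyLength(List<ByteBuffer> partBuffers) {
        return partBuffers.stream().mapToLong(ByteBuffer::remaining).sum();
    }

    public static void main(String[] args) {
        List<ByteBuffer> body = List.of(
                ByteBuffer.wrap("hello ".getBytes(StandardCharsets.UTF_8)),
                ByteBuffer.wrap("world".getBytes(StandardCharsets.UTF_8)));

        long fileLength = actualBodyLength(body);  // bytes of the file payload only
        long requestContentLength = 211;           // hypothetical whole-request size: headers, boundaries, payload

        // Declaring the larger request-level length makes S3 keep waiting for
        // bytes that never arrive; once the connection idles long enough,
        // S3 closes the socket and the client surfaces a timeout.
        System.out.println(fileLength);
        System.out.println(fileLength < requestContentLength);
    }
}
```

In the WebFlux case, this means taking the length from the part that carries the file itself instead of from the top-level request headers.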