Alpakka S3 connection issue
I am trying to use Alpakka S3 to connect to a MinIO instance for storing files, but I have run into problems since upgrading the library from version 1.1.2 to 2.0.0.
Here is a simple service class with just two methods that attempt to create a bucket. I tried two approaches: first, loading the Alpakka settings from the local configuration file (application.conf in my case), and second, creating the settings directly via S3Ext.
Both approaches fail and I am not sure what the problem is. Judging by the errors, the settings do not seem to be loaded correctly, but I cannot tell what I am doing wrong here.
What I am using:
- Play Framework 2.8.1
- Scala 2.13.2
- akka-stream-alpakka-s3 2.0.0
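For reference, the corresponding sbt dependency (assuming an sbt build; these are the standard Alpakka coordinates) would look like this:

```scala
// build.sbt -- Alpakka S3 connector for Akka Streams
libraryDependencies += "com.lightbend.akka" %% "akka-stream-alpakka-s3" % "2.0.0"
```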
Here is the service class:
package services

import akka.actor.ActorSystem
import akka.stream.alpakka.s3._
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink
import akka.stream.{Attributes, Materializer}
import javax.inject.{Inject, Singleton}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, AwsCredentials, AwsCredentialsProvider}
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.regions.providers.AwsRegionProvider

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

@Singleton
class AlpakkaS3PlaygroundService @Inject()(
    materializer: Materializer,
    system: ActorSystem,
) {

  def makeBucket(bucketName: String): Future[String] = {
    S3.makeBucket(bucketName)(materializer) map { _ =>
      "bucket created"
    }
  }

  def makeBucket2(bucketName: String): Future[String] = {
    val s3Host      = "http://localhost:9000"
    val s3AccessKey = "access_key"
    val s3SecretKey = "secret_key"
    val s3Region    = "eu-central-1"

    val credentialsProvider = new AwsCredentialsProvider {
      override def resolveCredentials(): AwsCredentials =
        AwsBasicCredentials.create(s3AccessKey, s3SecretKey)
    }

    val regionProvider = new AwsRegionProvider {
      override def getRegion: Region = Region.of(s3Region)
    }

    val settings: S3Settings = S3Ext(system).settings
      .withEndpointUrl(s3Host)
      .withBufferType(MemoryBufferType)
      .withCredentialsProvider(credentialsProvider)
      .withListBucketApiVersion(ApiVersion.ListBucketVersion2)
      .withS3RegionProvider(regionProvider)

    val attributes: Attributes = S3Attributes.settings(settings)

    S3.makeBucketSource(bucketName)
      .withAttributes(attributes)
      .runWith(Sink.head)(materializer) map { _ =>
      "bucket created"
    }
  }
}
The configuration in application.conf looks like this:
akka.stream.alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}
When using the first method of the service (makeBucket(...)), I see this error:
SdkClientException: Unable to load region from any of the providers in the chain software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@34cb16dc:
[software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@804e08b: Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or system property (aws.region)., software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@4d5f4b4d: No region provided in profile: default, software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@557feb58: Unable to contact EC2 metadata service.]
The error message is quite precise and I understand what went wrong, but I just do not know what to do about it, because I did specify the settings as outlined in the documentation. Any ideas?
In the second method of the service (makeBucket2(...)), I tried to set the S3 settings explicitly, but that does not seem to work either. The error looks like this:
play.api.http.HttpErrorHandlerExceptions$$anon: Execution exception[[S3Exception: 404 page not found
]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:335)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:253)
at play.core.server.AkkaHttpServer$$anonfun.applyOrElse(AkkaHttpServer.scala:424)
at play.core.server.AkkaHttpServer$$anonfun.applyOrElse(AkkaHttpServer.scala:420)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:453)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run(BatchingExecutor.scala:92)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:47)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:47)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: akka.stream.alpakka.s3.S3Exception: 404 page not found
The settings defined here do not seem to be taken into account at all, since the service apparently cannot be found. This is actually the approach I used in a previous version of the software, with akka-stream-alpakka-s3 version 1.1.2, and there it worked as expected.
Of course I want to use Alpakka S3 for more than just creating buckets, but to keep this demonstration of my problem simple, I am sticking to this one example. I assume that once this is solved, all the other methods Alpakka provides will work as well.
I have really gone through the documentation several times, but I still could not solve this, so I hope someone here can help me.
As of 2.0.0 at the latest, the configuration path for Alpakka S3 is alpakka.s3 instead of akka.stream.alpakka.s3:
alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
}
I got help in the Lightbend forum here.
The problem was solved by setting the following parameter:
alpakka.s3.path-style-access = true
Since the documentation says this value will be deprecated, I had not considered specifying it.
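For completeness, a sketch of the full application.conf for a local MinIO setup, combining the new alpakka.s3 path with the path-style flag (values taken from the question):

```
alpakka.s3 {
  aws {
    credentials {
      provider = static
      access-key-id = "access_key"
      secret-access-key = "secret_key"
    }
    region {
      provider = static
      default-region = "eu-central-1"
    }
  }
  endpoint-url = "http://localhost:9000"
  path-style-access = true
}
```

Path-style addressing matters here because MinIO serves buckets under the URL path (http://localhost:9000/bucket) rather than as a subdomain, which is why the virtual-host-style requests produced a 404.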
In my original post I outlined two ways of setting the parameters: one via application.conf and one programmatically via S3Ext. The first works by setting the values as shown above; the second approach looks like this:
val settings: S3Settings = S3Ext(system).settings
  .withEndpointUrl(s3Host)
  .withBufferType(MemoryBufferType)
  .withCredentialsProvider(credentialsProvider)
  .withListBucketApiVersion(ApiVersion.ListBucketVersion2)
  .withS3RegionProvider(regionProvider)
  .withPathStyleAccess(true)
The last line is essential, even though it produces a deprecation warning. But in the end, this solved the problem.