spark and aws redshift: java.sql.SQLException: No suitable driver found for jdbc:redshift://xxx.us-west-2.redshift.amazonaws.com:5439

os: centos

spark: 1.6.1

sbt: build.sbt

libraryDependencies ++= {
  Seq(
    "org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
    "com.amazonaws" % "aws-java-sdk" % "1.10.75",
    "com.amazonaws" % "amazon-kinesis-client" % "1.1.0",
    "com.amazon.redshift" % "jdbc4" % "1.1.7.1007" % "test"
  )
}

resolvers ++= Seq(
  "redshift" at "https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC4-1.1.7.1007.jar"
)
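
(Side note: an sbt resolver is expected to point at a repository root, not at a single jar file, so the resolvers entry above will not resolve anything on its own. If the intent is to pull the driver straight from that URL, sbt's `from` modifier is the usual route; a sketch, not tested against this exact artifact:)

// sketch: attach an explicit download URL to the dependency itself,
// so no custom resolver is needed
libraryDependencies += "com.amazon.redshift" % "jdbc4" % "1.1.7.1007" from "https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC4-1.1.7.1007.jar"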

spark application:

val redshiftDriver = "com.amazon.redshift.jdbc4.Driver"
Class.forName(redshiftDriver)
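
(For context, "No suitable driver" is thrown by java.sql.DriverManager when no registered driver accepts the URL, which in practice almost always means the driver jar isn't on the runtime classpath. A minimal standalone connection test looks roughly like this; the endpoint, database, and credentials are placeholders:)

import java.sql.DriverManager
import java.util.Properties

// placeholders: substitute a real cluster endpoint, database, and credentials
val url = "jdbc:redshift://xxx.us-west-2.redshift.amazonaws.com:5439/mydb"
val props = new Properties()
props.setProperty("user", "username")
props.setProperty("password", "password")

val conn = DriverManager.getConnection(url, props)
try {
  // trivial round trip to prove the driver and connection work
  val rs = conn.createStatement().executeQuery("SELECT 1")
  while (rs.next()) println(rs.getInt(1))
} finally {
  conn.close()
}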

I've specified the Redshift driver and updated the URL and so on, following the official AWS documentation here: http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-in-code.html

But I'm still getting the following error:

java.sql.SQLException: No suitable driver found for jdbc:redshift://xxx.us-west-2.redshift.amazonaws.com:5439

I googled around and some people said the jar should be added to the classpath? Can anyone help here? Thanks a lot.
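
(For reference, "adding the jar to the classpath" in Spark terms usually means settings like the following; the jar path is illustrative:)

import org.apache.spark.SparkConf

// sketch: make the Redshift JDBC jar visible to the executors; the path is
// illustrative. For the driver itself, spark.driver.extraClassPath normally
// has to be supplied at launch time (e.g. spark-submit --driver-class-path),
// because the driver JVM is already running when application code executes.
val conf = new SparkConf()
  .setAppName("redshift-app")
  .set("spark.executor.extraClassPath", "/path/to/RedshiftJDBC4-1.1.7.1007.jar")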

Solved:

Just cleared out all the cached artifacts and rebuilt everything from scratch, and then it worked fine.

Added:

Databricks built this library, which makes life much easier when working with Redshift from Spark: https://github.com/databricks/spark-redshift

// Get some data from a Redshift table
// (assumes an existing SQLContext named sqlContext; the URL, table name,
// and S3 tempdir below are placeholders)
import org.apache.spark.sql.DataFrame

val df: DataFrame = sqlContext.read
    .format("com.databricks.spark.redshift")
    .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass")
    .option("dbtable", "my_table")
    .option("tempdir", "s3n://path/for/temp/data")
    .load()
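
The library also writes back to Redshift through the same S3 tempdir mechanism; a sketch mirroring the read above, with an illustrative target table name:

// Write the DataFrame out to a Redshift table, staging the data in S3
df.write
    .format("com.databricks.spark.redshift")
    .option("url", "jdbc:redshift://redshifthost:5439/database?user=username&password=pass")
    .option("dbtable", "my_table_copy") // illustrative target table
    .option("tempdir", "s3n://path/for/temp/data")
    .mode("error")
    .save()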