Netezza connection with Spark / Scala JDBC
I have set up Spark 2.2.0 with Scala 2.11.8 in the IntelliJ IDE on my Windows machine. I am trying to get Spark to connect to Netezza using the JDBC driver.
I read through this link and added the com.ibm.spark.netezza jars to my project via Maven. I tried running the Scala script below just to test the connection:
package jdbc

import org.apache.spark.sql.SparkSession

object SimpleScalaSpark {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("SimpleScalaSpark")
      .getOrCreate()

    val nzoptions = Map(
      "url" -> "jdbc:netezza://SERVER:5480/DATABASE",
      "user" -> "USER",
      "password" -> "PASSWORD",
      "dbtable" -> "ADMIN.TABLENAME")

    val df = spark.read.format("com.ibm.spark.netezza").options(nzoptions).load()
  }
}
But I get the following error:
17/07/27 16:28:17 ERROR NetezzaJdbcUtils$: Couldn't find class org.netezza.Driver
java.lang.ClassNotFoundException: org.netezza.Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
at com.ibm.spark.netezza.NetezzaJdbcUtils$$anonfun$getConnector.apply(NetezzaJdbcUtils.scala:49)
at com.ibm.spark.netezza.NetezzaJdbcUtils$$anonfun$getConnector.apply(NetezzaJdbcUtils.scala:46)
at com.ibm.spark.netezza.DefaultSource.createRelation(DefaultSource.scala:50)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
at jdbc.SimpleScalaSpark$.main(SimpleScalaSpark.scala:20)
at jdbc.SimpleScalaSpark.main(SimpleScalaSpark.scala)
Exception in thread "main" java.sql.SQLException: No suitable driver found for jdbc:netezza://SERVER:5480/DATABASE
at java.sql.DriverManager.getConnection(DriverManager.java:689)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at com.ibm.spark.netezza.NetezzaJdbcUtils$$anonfun$getConnector.apply(NetezzaJdbcUtils.scala:54)
at com.ibm.spark.netezza.NetezzaJdbcUtils$$anonfun$getConnector.apply(NetezzaJdbcUtils.scala:46)
at com.ibm.spark.netezza.DefaultSource.createRelation(DefaultSource.scala:50)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
at jdbc.SimpleScalaSpark$.main(SimpleScalaSpark.scala:20)
at jdbc.SimpleScalaSpark.main(SimpleScalaSpark.scala)
I have two thoughts:
1) I don't believe I actually have any Netezza JDBC driver installed, although I thought the jars I brought into my project from the link above would be sufficient. Am I just missing the driver, or am I missing something in my Scala script?
2) In the same link, the author mentions starting the Netezza Spark package:
For example, to use the Spark Netezza package with Spark's interactive
shell, start it as shown below:

$SPARK_HOME/bin/spark-shell --packages com.ibm.SparkTC:spark-netezza_2.10:0.1.1 --driver-class-path ~/nzjdbc.jar
I don't think I'm invoking any package other than jdbc in my script. Do I have to add that to my script as well?
Thanks!
I think your first thought is correct. If you haven't installed the Netezza JDBC driver, you almost certainly need to install it.
From the link you posted:
This package can be deployed as part of an application program or from
Spark tools such as spark-shell, spark-sql. To use the package in the
application, you have to specify it in your application’s build
dependency. When using from Spark tools, add the package using
--packages command line option. Netezza JDBC driver also should be
added to the application dependencies.
You will have to download the Netezza driver yourself, and you need support entitlement to access it (via IBM's Fix Central or Passport Advantage). It is included in either the Windows driver/client support package or the Linux driver package.
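Since the project is built in IntelliJ rather than launched through spark-shell, the build-dependency equivalent of the quoted --packages / --driver-class-path flags is to declare the connector in the build file and put the downloaded driver jar on the project classpath. A minimal sketch, assuming an sbt build and assuming the driver jar is named nzjdbc.jar (both are assumptions; a Maven pom would need the analogous dependencies):

```scala
// build.sbt -- a sketch, not a verified build file.
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "2.2.0",
  // Artifact name taken from the spark-shell command quoted above;
  // note the _2.10 suffix -- check whether a build matching your
  // Scala version is published before relying on this coordinate.
  "com.ibm.SparkTC" % "spark-netezza_2.10" % "0.1.1"
)

// The Netezza JDBC driver (nzjdbc.jar) is not in any public Maven
// repository. After downloading it from IBM, place it in the
// project's lib/ directory: sbt treats jars in lib/ as unmanaged
// dependencies and adds them to the classpath automatically.
```

With the jar on the classpath, the ClassNotFoundException for org.netezza.Driver should go away, since Spark's DriverRegistry can then load the driver class.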