"java.lang.IllegalArgumentException: requirement failed: Overflowed precision" error when fetching Oracle data in Python with PySpark and the JDBC driver

I am trying to connect to an Oracle database from Spark, using PySpark: Spark 1.5, scala-2.10.4, Python 3.4, ojdbc7.jar. I have not installed an Oracle client; I only copied the Oracle libraries and set LD_LIBRARY_PATH. I have tested this setup and it works: from the OS (CentOS 7) I can fetch data with both R (using the ROracle package) and Python 3.4 (using cx_Oracle). In PySpark I used the following connection:

df=sqlContext.read.format('jdbc').options(url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',dbtable="Table").load()
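
For readability, the same read can also be written with the options spread out. A minimal sketch; the explicit driver option is an addition here (oracle.jdbc.OracleDriver is the driver class shipped in ojdbc7.jar), not something the original call required:

# The same JDBC read, reformatted; the url and table are the
# placeholders from above, and the explicit driver class is an
# assumption based on what ojdbc7.jar ships.
df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',
    dbtable='Table',
    driver='oracle.jdbc.OracleDriver'
).load()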

The connection works, but when I try df.head() I get this error:

15/12/03 16:41:52 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
15/12/03 16:41:52 INFO DAGScheduler: Got job 2 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
15/12/03 16:41:52 INFO DAGScheduler: Final stage: ResultStage 2(showString at NativeMethodAccessorImpl.java:-2)
15/12/03 16:41:52 INFO DAGScheduler: Parents of final stage: List()
15/12/03 16:41:52 INFO DAGScheduler: Missing parents: List()
15/12/03 16:41:52 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[5] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/12/03 16:41:52 INFO MemoryStore: ensureFreeSpace(5872) called with curMem=17325, maxMem=13335873454
15/12/03 16:41:52 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.7 KB, free 12.4 GB)
15/12/03 16:41:52 INFO MemoryStore: ensureFreeSpace(2789) called with curMem=23197, maxMem=13335873454
15/12/03 16:41:52 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.7 KB, free 12.4 GB)
15/12/03 16:41:52 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:41646 (size: 2.7 KB, free: 12.4 GB)
15/12/03 16:41:52 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
15/12/03 16:41:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[5] at showString at NativeMethodAccessorImpl.java:-2)
15/12/03 16:41:52 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
15/12/03 16:41:52 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, PROCESS_LOCAL, 1929 bytes)
15/12/03 16:41:52 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
15/12/03 16:41:52 INFO JDBCRDD: closed connection
15/12/03 16:41:52 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.IllegalArgumentException: requirement failed: Overflowed precision
...

When I searched, I found that this is a bug that was fixed on GitHub, supposedly by the line below:

case java.sql.Types.NUMERIC       => DecimalType.bounded(precision + scale, scale)

But as far as I can tell, that line is already present in my JDBCRDD.scala file.
Is there any way to work around this problem?
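
One way to narrow down which column triggers the overflow is to inspect the schema Spark inferred: the schema is built from the JDBC metadata at read time, so it is available even though fetching rows fails. A minimal sketch:

# Print the schema inferred from the Oracle JDBC metadata; this works
# even though df.head() fails, and shows which columns were mapped to
# a DecimalType with a suspicious precision/scale.
df.printSchema()
for field in df.schema.fields:
    print(field.name, field.dataType)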

I consulted a Spark developer; he told me that this is a bug and that we should wait for a new release or use the Spark version from JIRA.
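
Until a fixed release is available, a workaround often suggested for this class of problem (a sketch, not something the developer confirmed; the column name NUM_COL and the NUMBER(19,4) target are assumptions for illustration) is to pass a subquery as dbtable that CASTs the unbounded NUMBER column to an explicit precision and scale, so the JDBC metadata no longer overflows Spark's DecimalType bounds:

# Workaround sketch: wrap the table in a subquery that CASTs the
# problematic column to an explicit NUMBER(precision, scale).
# NUM_COL and NUMBER(19,4) are hypothetical, for illustration only.
query = "(SELECT CAST(NUM_COL AS NUMBER(19,4)) AS NUM_COL FROM Table) t"
df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',
    dbtable=query
).load()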