Spark Scala error in Intellij : Could not locate executable null\bin\winutils.exe in the Hadoop binaries

I want to run the code below to load a CSV into a Spark DataFrame in IntelliJ.

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object ReadCSVFile {

  case class Employee(empno:String, ename:String, designation:String, manager:String, 
hire_date:String, sal:String , deptno:String)

  def main(args : Array[String]): Unit = {
    var conf = new SparkConf().setAppName("Read CSV File").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val textRDD = sc.textFile("src\main\resources\emp_data.csv")
    //textRDD.foreach(println)

    val empRdd = textRDD.map {
      line =>
        val col = line.split(",")
        Employee(col(0), col(1), col(2), col(3), col(4), col(5), col(6))
    }
    val empDF = empRdd.toDF()
    empDF.show()

  }
}
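(For reference: in a plain Scala string literal the backslashes in "src\main\resources\emp_data.csv" are treated as escape sequences, so the path is safer written with forward slashes. Spark can also load the CSV straight into a DataFrame via the DataFrameReader instead of splitting lines by hand. The following is only a minimal sketch of that alternative; the object name ReadCSVWithReader is made up here, and the column names are simply taken from the Employee case class above.)

import org.apache.spark.sql.SparkSession

object ReadCSVWithReader {
  def main(args: Array[String]): Unit = {
    // SparkSession wraps SparkContext and SQLContext in Spark 2.x
    val spark = SparkSession.builder()
      .appName("Read CSV File")
      .master("local[*]")
      .getOrCreate()

    // Forward slashes avoid escape-sequence problems in Scala string literals;
    // the relative path is resolved against IntelliJ's working directory
    // (the project root by default).
    val empDF = spark.read
      .option("header", "false")
      .csv("src/main/resources/emp_data.csv")
      .toDF("empno", "ename", "designation", "manager", "hire_date", "sal", "deptno")

    empDF.show()
    spark.stop()
  }
}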

Here is also a screenshot from IntelliJ so you can see my project structure:

And the last thing is my build.sbt file:

ThisBuild / version := "0.1.0-SNAPSHOT"

ThisBuild / scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.12" % "2.4.5",
  "org.apache.spark" % "spark-sql_2.12" % "2.4.5"
)

I am using Oracle OpenJDK version 17.0.2 as the project SDK.

When I execute my code, I get the following errors:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/02/28 13:49:17 INFO SparkContext: Running Spark version 2.4.5
22/02/28 13:49:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
22/02/28 13:49:17 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:378)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:393)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:386)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:116)
at org.apache.hadoop.security.Groups.<init>(Groups.java:93)
at org.apache.hadoop.security.Groups.<init>(Groups.java:73)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:293)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:789)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName(Utils.scala:2422)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:293)
at ReadCSVFile$.main(ReadCSVFile.scala:10)
at ReadCSVFile.main(ReadCSVFile.scala)
22/02/28 13:49:17 INFO SparkContext: Submitted application: Read CSV File
22/02/28 13:49:18 INFO SecurityManager: Changing view acls to: adamuser
22/02/28 13:49:18 INFO SecurityManager: Changing modify acls to: adamuser
22/02/28 13:49:18 INFO SecurityManager: Changing view acls groups to: 
22/02/28 13:49:18 INFO SecurityManager: Changing modify acls groups to: 
22/02/28 13:49:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(adamuser); groups with view permissions: Set(); users  with modify permissions: Set(adamuser); groups with modify permissions: Set()
22/02/28 13:49:18 INFO Utils: Successfully started service 'sparkDriver' on port 52614.
22/02/28 13:49:18 INFO SparkEnv: Registering MapOutputTracker
22/02/28 13:49:18 INFO SparkEnv: Registering BlockManagerMaster
22/02/28 13:49:18 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/02/28 13:49:18 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/02/28 13:49:18 INFO DiskBlockManager: Created local directory at 
C:\Users\adamuser\AppData\Local\Temp\blockmgr-c2b9fbdc-cd1a-4a49-8e60-10d3a0f21f3a
22/02/28 13:49:18 INFO MemoryStore: MemoryStore started with capacity 1032.0 MB
22/02/28 13:49:18 INFO SparkEnv: Registering OutputCommitCoordinator
22/02/28 13:49:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/02/28 13:49:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://SH-P101.s-h.local:4040
22/02/28 13:49:19 INFO Executor: Starting executor ID driver on host localhost
22/02/28 13:49:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52615.
22/02/28 13:49:19 INFO NettyBlockTransferService: Server created on SH-P101.s-h.local:52615
22/02/28 13:49:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/02/28 13:49:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, SH-P101.s-h.local, 52615, None)
22/02/28 13:49:19 INFO BlockManagerMasterEndpoint: Registering block manager SH-P101.s-h.local:52615 with 1032.0 MB RAM, BlockManagerId(driver, SH-P101.s-h.local, 52615, None)
22/02/28 13:49:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, SH-P101.s-h.local, 52615, None)
22/02/28 13:49:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, SH-P101.s-h.local, 52615, None)
22/02/28 13:49:20 WARN BlockManager: Putting block broadcast_0 failed due to exception java.lang.reflect.InaccessibleObjectException: Unable to make field transient java.lang.Object[] java.util.ArrayList.elementData accessible: module java.base does not "opens java.util" to unnamed module @15c43bd9.
22/02/28 13:49:20 WARN BlockManager: Block broadcast_0 could not be removed as it was not found on disk or in memory
Exception in thread "main" java.lang.reflect.InaccessibleObjectException: Unable to make field transient java.lang.Object[] java.util.ArrayList.elementData accessible: module java.base does not "opens java.util" to unnamed module @15c43bd9
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
at org.apache.spark.util.SizeEstimator$.$anonfun$getClassInfo(SizeEstimator.scala:336)
at org.apache.spark.util.SizeEstimator$.$anonfun$getClassInfo$adapted(SizeEstimator.scala:330)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
at org.apache.spark.util.SizeEstimator$.getClassInfo(SizeEstimator.scala:330)
at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:222)
at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:201)
at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:69)
at org.apache.spark.util.collection.SizeTracker.takeSample(SizeTracker.scala:78)
at org.apache.spark.util.collection.SizeTracker.afterUpdate(SizeTracker.scala:70)
at org.apache.spark.util.collection.SizeTracker.afterUpdate$(SizeTracker.scala:67)
at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
at org.apache.spark.storage.memory.DeserializedValuesHolder.storeValue(MemoryStore.scala:665)
at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:222)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator(BlockManager.scala:1165)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:914)
at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1481)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1489)
at org.apache.spark.SparkContext.$anonfun$hadoopFile(SparkContext.scala:1035)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1027)
at org.apache.spark.SparkContext.$anonfun$textFile(SparkContext.scala:831)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:828)
at ReadCSVFile$.main(ReadCSVFile.scala:14)
at ReadCSVFile.main(ReadCSVFile.scala)
22/02/28 13:49:20 INFO SparkContext: Invoking stop() from shutdown hook
22/02/28 13:49:20 INFO SparkUI: Stopped Spark web UI at http://SH-P101.s-h.local:4040
22/02/28 13:49:20 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/02/28 13:49:20 INFO MemoryStore: MemoryStore cleared
22/02/28 13:49:20 INFO BlockManager: BlockManager stopped
22/02/28 13:49:20 INFO BlockManagerMaster: BlockManagerMaster stopped
22/02/28 13:49:20 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/02/28 13:49:20 INFO SparkContext: Successfully stopped SparkContext
22/02/28 13:49:20 INFO ShutdownHookManager: Shutdown hook called
22/02/28 13:49:20 INFO ShutdownHookManager: Deleting directory 
C:\Users\adamuser\AppData\Local\Temp\spark-67d11686-a6b8-4744-ab04-2ac1623db9dd

Process finished with exit code 1

I tried to resolve the first problem: "java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries."

So I downloaded the winutils.exe file, put it in C:\hadoop\bin, and set the HADOOP_HOME environment variable to C:\hadoop\bin, but that didn't solve anything. I also tried adding:

System.setProperty("hadoop.home.dir", "c/hadoop/bin")

...to my code, but that didn't work either.

I haven't found anything about the second error, but I suspect my resource file cannot be found (maybe the path is defined incorrectly?).

Can anyone point out what is wrong and what needs to be changed?

Answers to both problems, from the comments:

First: set the HADOOP_HOME environment variable to C:\hadoop (without "bin") and append C:\hadoop\bin to the PATH environment variable.
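For completeness, the same thing can also be done from code instead of through the environment variable. The sketch below assumes winutils.exe sits in C:\hadoop\bin, as described above; note that hadoop.home.dir points at C:\hadoop (the parent of bin) and must be set before the SparkContext is created. The object name WinutilsCheck is only illustrative.

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsCheck {
  def main(args: Array[String]): Unit = {
    // hadoop.home.dir must point at the directory that CONTAINS bin\winutils.exe,
    // i.e. C:\hadoop rather than C:\hadoop\bin, and it must be set before the
    // SparkContext (and with it Hadoop's Shell class) is initialised.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setAppName("WinutilsCheck").setMaster("local[*]")
    val sc = new SparkContext(conf)
    println(s"Spark ${sc.version} started")  // the winutils ERROR should no longer appear in the log
    sc.stop()
  }
}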

Second: use JDK 8, because Spark 2.4.5 does not support JDK 17.
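If it helps, a small guard can be added to build.sbt so the build fails fast when sbt is started with the wrong JDK; this is only an optional sketch on top of the answer above, not part of the fix itself.

// Abort early if sbt is not running on JDK 8, since Spark 2.4.x only supports Java 8.
initialize := {
  val _ = initialize.value  // keep sbt's default initialization
  val required = "1.8"
  val current = sys.props("java.specification.version")
  assert(current == required,
    s"This build requires JDK $required, but sbt is running on JDK $current")
}

In IntelliJ itself, the project SDK (File > Project Structure > Project > SDK) also has to point at a JDK 8 installation, since that is the JVM the run configuration uses.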