Hadoop 3.3.0: RPC response has invalid length

I just installed PySpark via Homebrew and am currently trying to get data into Hadoop.

The problem

Any interaction with Hadoop fails.

I set up Hadoop 3.3.0 on macOS following a tutorial.

Even after fixing the few version issues I ran into along the way (a specific JDK, MySQL, etc.), it somehow still doesn't work.

Whenever I try to run any Hadoop-related command, I get:

▶ hadoop fs -ls /
2021-05-12 07:45:44,647 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: RPC response has invalid length

Running this code in a notebook:

from pyspark.sql.session import SparkSession

# https://saagie.zendesk.com/hc/en-us/articles/360029759552-PySpark-Read-and-Write-Files-from-HDFS
sparkSession = SparkSession.builder.appName("example-pyspark-read-and-write").getOrCreate()
# Create data
data = [('First', 1), ('Second', 2), ('Third', 3), ('Fourth', 4), ('Fifth', 5)]
df = sparkSession.createDataFrame(data)

# Write into HDFS
df.write.csv("hdfs://localhost:9000/cluster/example.csv")
# Read from HDFS
df_load = sparkSession.read.csv("hdfs://localhost:9000/cluster/example.csv")
df_load.show()

sparkSession.stop()

throws:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-5-e25cae5a6cac> in <module>
      8 
      9 # Write into HDFS
---> 10 df.write.csv("hdfs://localhost:9000/cluster/example.csv")
     11 # Read from HDFS
     12 df_load = sparkSession.read.csv("hdfs://localhost:9000/cluster/example.csv")

/usr/local/Cellar/apache-spark/3.1.1/libexec/python/pyspark/sql/readwriter.py in csv(self, path, mode, compression, sep, quote, escape, header, nullValue, escapeQuotes, quoteAll, dateFormat, timestampFormat, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, charToEscapeQuoteEscaping, encoding, emptyValue, lineSep)
   1369                        charToEscapeQuoteEscaping=charToEscapeQuoteEscaping,
   1370                        encoding=encoding, emptyValue=emptyValue, lineSep=lineSep)
-> 1371         self._jwrite.csv(path)
   1372 
   1373     def orc(self, path, mode=None, partitionBy=None, compression=None):

/usr/local/lib/python3.9/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1307 
   1308         answer = self.gateway_client.send_command(command)
-> 1309         return_value = get_return_value(
   1310             answer, self.gateway_client, self.target_id, self.name)
   1311 

/usr/local/Cellar/apache-spark/3.1.1/libexec/python/pyspark/sql/utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

/usr/local/lib/python3.9/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o99.csv.
: java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response has invalid length; Host Details : local host is: "blkpingu16-MBP.fritz.box/192.xxx.xxx.xx"; destination host is: "localhost":9000; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:816)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
    ...
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
    ...
    at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response has invalid length
    at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1827)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1173)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1069)

The key part being: RPC response has invalid length

I have configured and verified all my paths in the various config files, e.g.

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>ipc.maximum.data.length</name>
    <value>134217728</value>
  </property>
</configuration>
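
(A quick sanity check, not from any tutorial: macOS ships with netcat, so you can probe whether anything is listening on the configured port at all.)

# prints "succeeded" if something is accepting connections on port 9000
nc -vz localhost 9000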

.zshrc

JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home"

...

## JAVA env variables
export JAVA_HOME="/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home"
export PATH=$PATH:$JAVA_HOME/bin

## HADOOP env variables
export HADOOP_HOME="/usr/local/Cellar/hadoop/3.3.0/libexec"
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar

## HIVE env variables
export HIVE_HOME=/usr/local/Cellar/hive/3.1.2_3/libexec
export PATH=$PATH:$HIVE_HOME/bin

## MySQL ENV
export PATH=$PATH:/usr/local/Cellar/mysql/8.0.23_1/bin

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

hadoop-env.sh

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home

When I start Hadoop, it seems to bring up all the nodes:

▶ $HADOOP_HOME/sbin/start-all.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [blkpingu16-MBP.fritz.box]
2021-05-12 08:18:15,786 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers

jps shows the Hadoop processes running, plus some Spark ones (note that no NameNode shows up in the list):

▶ jps
166 Jps
99750 ResourceManager
99544 SecondaryNameNode
99851 NodeManager
98154 SparkSubmit
99405 DataNode
39326 Master

http://localhost:8088/cluster is available and shows the Hadoop dashboard (YARN, per the tutorial I followed)
http://localhost:8080 is available and shows the Spark dashboard
http://localhost:9870 is not available (it should show the NameNode web UI, i.e. the Hadoop-related stuff)

My main problem is that I don't know why my NameNode isn't there, since it should be, and consequently why I can't talk to HDFS, either from the command line (to put data into it) or from the notebook (to request data). The Hadoop side is broken and I don't know how to fix it.

I ran into the same problem today and want to note it here in case anyone hits something similar. A quick jps told me that the NameNode process was missing, even though no warning or error had appeared anywhere.

In the NameNode's .log file under Hadoop's logs directory I found a java.net.BindException: Problem binding to [localhost:9000], which made me suspect that port 9000 was already in use by another process. I used the command from this source to check open ports, and it was indeed held by a python process (the only thing I was running at the time was PySpark). (The command was sudo lsof -i -P -n | grep LISTEN, by the way, in case anyone needs it.)
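
For reference, this is roughly what that check looks like (the log path and file name pattern are assumptions based on the Homebrew layout from the question; adjust them to your install):

# look for the bind error in the NameNode log
grep -n "BindException" /usr/local/Cellar/hadoop/3.3.0/libexec/logs/hadoop-*-namenode-*.log

# list listening sockets and see which process is sitting on port 9000
sudo lsof -i -P -n | grep LISTEN | grep 9000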

The solution was pretty simple: change the port number in the fs.defaultFS field of etc/core-site.xml to another, unused port (mine is 9900 now).
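
In core-site.xml that means just swapping the port in fs.defaultFS; a minimal sketch (9900 is arbitrary, any free port should do):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- moved off 9000 because another process was already bound to it -->
    <value>hdfs://localhost:9900</value>
  </property>
</configuration>

After the change, restart Hadoop ($HADOOP_HOME/sbin/stop-all.sh, then start-all.sh) and remember to update the port in any hardcoded hdfs:// URLs, e.g. hdfs://localhost:9900/cluster/example.csv in the notebook code above.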