Error Running Yarn Jar MRAppMaster NoSuchMethodError

I'm running out of ideas... I've tried many configurations, but nothing works. I'm trying to run a jar file via YARN on my Hadoop cluster, only to get:

2020-10-07 21:27:01,960 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1602101475531_0003_000002
2020-10-07 21:27:02,145 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
/************************************************************
[system properties]
###
************************************************************/
2020-10-07 21:27:02,149 ERROR [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/data/hadoop/yarn/usercache/hdfs-user/appcache/application_1602101475531_0003/filecache/11/job.jar/job.jar by sun.misc.Launcher$AppClassLoader@8da96717) called from class org.apache.hadoop.conf.Configuration (loaded from file:/data/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar by sun.misc.Launcher$AppClassLoader@8da96717).
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1380)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1361)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1690)
2020-10-07 21:27:02,152 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.NoSuchMethodError: com/google/common/base/Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V (loaded from file:/data/hadoop/yarn/usercache/hdfs-user/appcache/application_1602101475531_0003/filecache/11/job.jar/job.jar by sun.misc.Launcher$AppClassLoader@8da96717) called from class org.apache.hadoop.conf.Configuration (loaded from file:/data/hadoop-3.3.0/share/hadoop/common/hadoop-common-3.3.0.jar by sun.misc.Launcher$AppClassLoader@8da96717).

My mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.cluster.temp.dir</name>
        <value>/tmp/hadoop-mapred</value>
        <final>true</final>
    </property>

    <property>
        <name>mapred.job.tracker</name>
        <value>###</value>
    </property>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>The runtime framework for executing MapReduce jobs.
            Can be one of local, classic or yarn.
        </description>
    </property>

    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>3072</value>
    </property>

    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>

    <property>
        <name>mapreduce.shuffle.port</name>
        <value>5010</value>
    </property>

    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>256</value>
    </property>

    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>64</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
       <name>mapreduce.reduce.env</name>
       <value>HADOOP_MAPRED_HOME=/data/hadoop-3.3.0</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/data/hadoop-3.3.0/etc/hadoop:/data/hadoop-3.3.0/share/hadoop/common/lib/*:/data/hadoop-3.3.0/share/hadoop/common/*:/data/hadoop-3.3.0/share/hadoop/hdfs:/data/hadoop-3.3.0/share/hadoop/hdfs/lib/*:/data/hadoop-3.3.0/share/hadoop/hdfs/*:/data/hadoop-3.3.0/share/hadoop/mapreduce/*:/data/hadoop-3.3.0/share/hadoop/yarn:/data/hadoop-3.3.0/share/hadoop/yarn/lib/*:/data/hadoop-3.3.0/share/hadoop/yarn/*</value>
     </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>###</value> <!-- hostname of machine  where jobhistory service is started -->
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>###</value>
    </property>


</configuration>

And my yarn-site.xml:

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>###</value>
        <description>Enter your ResourceManager hostname.</description>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/hadoop/yarn</value>
        <description>Comma separated list of paths. Use the list of directories from $YARN_LOCAL_DIR. For example, /grid/hadoop/yarn/local,/grid1/hadoop/yarn/local.</description>
    </property>

    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/data/hadoop/yarn-logs</value>
        <description>Use the list of directories from $YARN_LOCAL_LOG_DIR. For example, /grid/hadoop/yarn/log,/grid1/hadoop/yarn/log,/grid2/hadoop/yarn/log</description>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>###</value>
        <description>URL for job history server</description>
    </property>

    <property>
        <name>yarn.timeline-service.webapp.address</name>
        <value>###</value>
    </property>

    <property>
        <name>yarn.application.classpath</name>
        <value>/data/hadoop-3.3.0/share/hadoop/mapreduce/*,/data/hadoop-3.3.0/share/hadoop/mapreduce/lib/*,/data/hadoop-3.3.0/share/hadoop/common/*,/data/hadoop-3.3.0/share/hadoop/common/lib/*,/data/hadoop-3.3.0/share/hadoop/hdfs/*,/data/hadoop-3.3.0/share/hadoop/hdfs/lib/*,/data/hadoop-3.3.0/share/hadoop/yarn/*,/data/hadoop-3.3.0/share/hadoop/yarn/lib/*</value>
    </property>

</configuration>

It always fails at this last stage... after my MapReduce program has almost run to completion. Any ideas would be greatly appreciated... Running Apache Hadoop 3.3.0.

Your Google Guava version appears to be either too old (< 20.0) or mismatched (multiple jar versions). Make sure you are not loading more than one version onto the HADOOP_CLASSPATH.

Locate the Guava jars by issuing the following command:

find /usr/local/Cellar/hadoop -name "guava*.jar" -type f
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/yarn/csi/lib/guava-20.0.jar
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/common/lib/guava-27.0-jre.jar
/usr/local/Cellar/hadoop/3.3.0/libexec/share/hadoop/hdfs/lib/guava-27.0-jre.jar
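
In your log, Preconditions is loaded from the submitted job.jar, which suggests an older Guava is bundled inside the application jar itself. A quick way to confirm this (assuming your built artifact is called myjob.jar; adjust the name to your build):

jar tf myjob.jar | grep -iE 'guava|com/google/common'   # nested guava-*.jar or unpacked Guava classes
unzip -p myjob.jar META-INF/MANIFEST.MF                  # Class-Path entries that may pull in another Guava

If either command turns up a Guava older than the 27.0-jre that ships with Hadoop 3.3.0, that copy is the one being loaded and it produces exactly this NoSuchMethodError.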

If you are using Maven, inspect the dependency tree with:

mvn dependency:tree | less
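
If the tree shows an old Guava coming in transitively, one minimal fix is to pin Guava to the version the cluster ships and stop packaging the Hadoop jars into your application jar. A sketch for the job's pom.xml, assuming it depends on hadoop-client (artifact names and versions are illustrative; match them to your build):

<!-- Force the Guava version that Hadoop 3.3.0 ships (27.0-jre) -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>27.0-jre</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- Use the cluster's Hadoop jars at runtime instead of bundling them into job.jar -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.3.0</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

If your own code genuinely needs a different Guava, relocating com.google.common with the maven-shade-plugin is the usual way to keep it from clashing with Hadoop's copy.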