distcp fails when copying from s3 to hdfs

I created a cluster (Spark on Amazon EMR) and tried to run the following from the command line.

CLI:

hadoop distcp s3a://bucket/file1 /data

Exception:

org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateExceptionImpl(SerializedExceptionPBImpl.java:171)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:182)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:162)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Check the following properties in /etc/hadoop/conf/yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
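You can quickly verify whether mapreduce_shuffle is registered before restarting anything. A minimal sketch: the yarn-site.xml fragment is inlined into a temp file here purely for illustration; on a real cluster node you would grep /etc/hadoop/conf/yarn-site.xml directly.

```shell
# Illustrative fragment written to a temp file; on a node, point grep at
# /etc/hadoop/conf/yarn-site.xml instead.
cat > /tmp/yarn-site-fragment.xml <<'EOF'
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
EOF

# Prints a confirmation if mapreduce_shuffle is listed, a warning otherwise.
if grep -q 'mapreduce_shuffle' /tmp/yarn-site-fragment.xml; then
  echo "mapreduce_shuffle configured"
else
  echo "mapreduce_shuffle MISSING" >&2
fi
```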

If mapreduce_shuffle is missing, add the property and restart the YARN service:

sudo stop hadoop-yarn-nodemanager
sudo start hadoop-yarn-nodemanager

I would suggest using the s3-dist-cp utility instead, since it is already available on EMR clusters.

s3-dist-cp --src s3://my-tables/incoming/hourly_table --dest /data/hdfslocation/path
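s3-dist-cp also supports filtering with --srcPattern, which restricts the copy to keys matching a regex. A sketch (the bucket, destination path, and pattern are illustrative; this only runs on an EMR node):

```shell
# Copy only objects whose key matches the regex; paths are illustrative.
s3-dist-cp --src s3://my-tables/incoming/hourly_table \
           --dest /data/hdfslocation/path \
           --srcPattern '.*\.log'
```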

https://aws.amazon.com/blogs/big-data/seven-tips-for-using-s3distcp-on-amazon-emr-to-move-data-efficiently-between-hdfs-and-amazon-s3/