Unable to create RDD from data in HDFS

I am trying to create an RDD from a file with the code below, but it fails. Is there a way to fix this? I have tried running it with the localhost:port details. I have also tried it with the full HDFS path: /user/training/intel/NYSE.csv. Whatever path I use, it is only searched on the local filesystem, not on HDFS. Thanks.

scala> val myrdd = sc.textFile("/training/intel/NYSE.csv")
myrdd: org.apache.spark.rdd.RDD[String] = /training/intel/NYSE.csv MapPartitionsRDD[5] at textFile at <console>:24

scala> myrdd.collect
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/training/intel/NYSE.csv
  at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911)
  at org.apache.spark.rdd.RDD$$anonfun$collect.apply(RDD.scala:893)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:892)
  ... 48 elided
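Before changing any configuration, it may help to first check where the file actually lives. A quick sketch (this assumes the `hdfs` CLI is on the PATH; the paths mirror the ones from the question):

```shell
# Does the path exist on the local filesystem?
ls -l /training/intel/NYSE.csv

# Does it exist in HDFS? (hdfs dfs and hadoop fs are interchangeable here)
hdfs dfs -ls /training/intel/NYSE.csv
hdfs dfs -ls /user/training/intel/NYSE.csv
```

Whichever of these succeeds tells you which path and which scheme (file:// vs hdfs://) to use in `sc.textFile`.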

I also tried the following:

scala> val myrdd = sc.textFile("hdfs://localhost:8020/training/intel/NYSE.csv")
myrdd: org.apache.spark.rdd.RDD[String] = hdfs://localhost:8020/training/intel/NYSE.csv MapPartitionsRDD[7] at textFile at <console>:24

scala> myrdd.collect
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details : local host is: "hadoop/127.0.0.1"; destination host is: "localhost":8020;
  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
  at org.apache.hadoop.ipc.Client.call(Client.java:1479)
  at org.apache.hadoop.ipc.Client.call(Client.java:1412)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
  at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy25.getFileInfo(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
  at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1305)
  at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:1301)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
  at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
  at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
  at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1674)
  at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911)
  at org.apache.spark.rdd.RDD$$anonfun$collect.apply(RDD.scala:893)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:892)
  ... 48 elided
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
  at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
  at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
  at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:2201)
  at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:2165)
  at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parsePartialFrom(RpcHeaderProtos.java:2295)
  at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parsePartialFrom(RpcHeaderProtos.java:2290)
  at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
  at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
  at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
  at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
  at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
  at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:3167)
  at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1086)
  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)

No matter how I run it, I am told the path does not exist.

This is caused by the internal mapping between directories. First go to the directory where your file (NYSE.csv) is kept, and run the command:

df -k

You will get the actual mount point of the directory, for example: /xyz

Now look for your file (NYSE.csv) under this mount point, for example /xyz/training/intel/NYSE.csv, and use that path in your code:

val myrdd = sc.textFile("/xyz/training/intel/NYSE.csv")
A path like "/training/intel/NYSE.csv" means "start looking from the top-level directory". Change it to "/user/training/intel/NYSE.csv", or to just "training/intel/NYSE.csv" (no leading /), to refer to the file relative to your current directory.
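The resolution rules above can be sketched with a small model (a simplification for illustration only, not Hadoop's actual implementation; the default-filesystem and home-directory values are assumptions matching this question's setup):

```python
from urllib.parse import urlparse

def resolve_path(path, default_fs="hdfs://localhost:8020", home="/user/training"):
    """Roughly mimic how a Hadoop-style client resolves a path string.

    - A path with an explicit scheme (hdfs://, file:/) is used as-is.
    - An absolute path (leading /) is resolved against the default filesystem.
    - A relative path (no leading /) is resolved against the user's home dir.
    """
    if urlparse(path).scheme:               # explicit scheme: use verbatim
        return path
    if path.startswith("/"):                # absolute: root of the default FS
        return default_fs + path
    return default_fs + home + "/" + path   # relative: under the home dir

# Note: when fs.defaultFS is left at its out-of-the-box value (the local
# filesystem), a bare "/training/intel/NYSE.csv" resolves to
# file:/training/intel/NYSE.csv -- exactly the error shown in the question.
print(resolve_path("/training/intel/NYSE.csv"))
print(resolve_path("training/intel/NYSE.csv"))
print(resolve_path("hdfs://localhost:8020/user/training/intel/NYSE.csv"))
```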

The file is in HDFS.

Spark, however, is configured to read from your local filesystem (file:/).

You need to edit the core-site.xml file that your Spark installation reads and make sure fs.defaultFS is set correctly, so that it points at your Hadoop NameNode.
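As a sketch, the relevant core-site.xml entry might look like the following (the host name and port here are placeholders; substitute the address of your own NameNode):

```xml
<configuration>
  <!-- Scheme-less paths such as /training/intel/NYSE.csv will resolve
       against this filesystem instead of the local one. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```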

InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details : local host is: "hadoop/127.0.0.1"; destination host is: "localhost":8020;

This means that the HDFS client Spark is using is incompatible with the API of the installed Hadoop server, or that you are connecting to the wrong port.

And, in any case, the file is not at hdfs:///training/..

Apart from that, HDFS is not required for learning Spark, so depending on your goals you could first try the hadoop fs commands, or move the file to the local filesystem.
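Following that suggestion, one way to sidestep HDFS entirely is to copy the file out to the local filesystem and point Spark at it with an explicit file:// scheme (the paths mirror the question; /tmp/NYSE.csv is just an example destination):

```shell
# Copy the file from HDFS to a local directory
hdfs dfs -get /user/training/intel/NYSE.csv /tmp/NYSE.csv

# Then, in spark-shell, read it with an explicit local-file scheme:
#   val myrdd = sc.textFile("file:///tmp/NYSE.csv")
#   myrdd.count
```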