Deleted Google Storage directory appears to "already exist" when calling Spark DataFrame.saveAsParquetFile()

After I deleted a Google Cloud Storage directory through the Google Cloud Console (the directory had been generated by an earlier Spark (ver 1.3.1) job), rerunning the job always fails: the job still sees the directory as existing, yet I cannot find the directory with gsutil.

Is this a bug, or am I missing something? Thanks!

The error I get:

java.lang.RuntimeException: path gs://<my_bucket>/job_dir1/output_1.parquet already exists.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:112)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1196)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:995)
at com.xxx.Job1$.execute(Job1.scala:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
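
For context, the failing call is essentially of the following shape (a minimal sketch, not the actual Job1 code; the SQLContext setup, the input read, and the paths are placeholder assumptions):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object Job1 {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("Job1"))
    val sqlContext = new SQLContext(sc)
    // Placeholder input; any DataFrame reproduces the write path below.
    val df = sqlContext.jsonFile("gs://<my_bucket>/job_dir1/input")
    // Fails with "already exists" even though the target directory was deleted:
    df.saveAsParquetFile("gs://<my_bucket>/job_dir1/output_1.parquet")
  }
}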

It looks like you may be running into a known bug with the NFS list-consistency cache: https://github.com/GoogleCloudPlatform/bigdata-interop/issues/5

It was fixed in the latest release; if you upgrade by deploying a new cluster with bdutil-1.3.1 (announced here: https://groups.google.com/forum/#!topic/gcp-hadoop-announce/vstNuV0LpDc), the problem should be fixed. If you need to upgrade in place, you can try downloading the latest gcs-connector-1.4.1 jarfile onto your master and worker nodes under /home/hadoop/hadoop-install/lib/gcs-connector-*.jar and then restarting the Spark daemons:

sudo sudo -u hadoop /home/hadoop/spark-install/sbin/stop-all.sh
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/start-all.sh
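
Separately, as an application-level workaround (a sketch assuming the Spark 1.3.1 DataFrame API, not a substitute for the connector upgrade above): writing with SaveMode.Overwrite via DataFrame.save deletes the target path instead of failing the existence check, which may sidestep a stale cache entry:

import org.apache.spark.sql.SaveMode

// df: an org.apache.spark.sql.DataFrame, as in the sketch in the question.
// Instead of df.saveAsParquetFile(path), which errors out if the path "exists":
df.save("gs://<my_bucket>/job_dir1/output_1.parquet", "parquet", SaveMode.Overwrite)

Whether this reliably avoids a stale list-consistency entry depends on the connector's delete path, so upgrading the connector is still the real fix.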