Spark-xml crashes on reading processing instructions

I am trying to read an XML file into a Spark dataframe with the Databricks spark-xml package. However, when it encounters a processing instruction, Spark raises an error claiming that an unexpected event occurred.

I want to import the XML files into a dataframe that I can then flatten and write out as CSV. The dataset is large enough that we need some kind of processing engine like Spark. I have looked through the spark-xml documentation but cannot find any mention of processing instructions. I do not actually need any of the information in the instructions, so I would happily ignore them if I could, but as it stands they trip up the entire file. Any suggestions would be greatly appreciated.
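
If nothing else works, I could strip the instructions out in a preprocessing pass before Spark ever sees the files. Here is a rough, untested sketch of what I have in mind (example_clean.xml is a made-up name, the regex only handles single-line instructions like the <?issue?> shown below, and for data at this scale the substitution would really need to run distributed rather than on one machine):

import re

# Rough sketch: drop processing instructions such as <?issue?> before Spark
# reads the file. The first line is kept untouched so the <?xml ...?>
# declaration survives; re.sub only handles instructions on a single line.
with open("example.xml") as src, open("example_clean.xml", "w") as dst:
    for i, line in enumerate(src):
        if i > 0:
            line = re.sub(r"<\?.*?\?>", "", line)
        dst.write(line)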

Here is an XML snippet that reproduces the problem:

<?xml version="1.0" encoding="UTF-8"?>
<row>
<description>
<?issue?>
<text>foo</text>
</description>
</row>

Here is how I try to read the XML in Python:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sql = SQLContext(sc)
xml = sql.read.format("com.databricks.spark.xml").option("rowTag", "row").load("example.xml")
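
Once the read succeeds, the rest of the plan looks roughly like this (untested, since the load is what fails; the column name comes from the snippet above, and example_out is a made-up output path):

# Flatten the nested structure and write it out as CSV. With the spark-csv
# package loaded (see the spark-submit line below), Spark 2.x maps the
# com.databricks.spark.csv format name onto its builtin CSV writer.
flat = xml.select(xml["description"]["text"].alias("text"))
flat.write.format("com.databricks.spark.csv").option("header", "true").save("example_out")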

For completeness, here is how I load the Databricks packages and submit the script to Spark:

spark-submit --packages com.databricks:spark-csv_2.11:1.5.0,com.databricks:spark-xml_2.10:0.4.1 example.py

When I try to read the XML with the code above, Spark raises an exception claiming an "unexpected event." The exact error message is below.

2019-08-20 13:47:03 ERROR Executor:91 - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.RuntimeException: Failed to parse data with unexpected event <?issue ?>
    at scala.sys.package$.error(package.scala:27)
    at com.databricks.spark.xml.util.InferSchema$.inferField(InferSchema.scala:151)
    at com.databricks.spark.xml.util.InferSchema$.com$databricks$spark$xml$util$InferSchema$$inferObject(InferSchema.scala:178)
    at com.databricks.spark.xml.util.InferSchema$$anonfun$$anonfun$apply.apply(InferSchema.scala:101)
    at com.databricks.spark.xml.util.InferSchema$$anonfun$$anonfun$apply.apply(InferSchema.scala:89)
    at scala.collection.Iterator$$anon.nextCur(Iterator.scala:434)
    at scala.collection.Iterator$$anon.hasNext(Iterator.scala:440)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:214)
    at scala.collection.AbstractIterator.aggregate(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$$anonfun.apply(RDD.scala:1139)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$$anonfun.apply(RDD.scala:1139)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$$anonfun.apply(RDD.scala:1140)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$$anonfun.apply(RDD.scala:1140)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$$anonfun$apply.apply(RDD.scala:800)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$$anonfun$apply.apply(RDD.scala:800)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
2019-08-20 13:47:03 WARN  TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.RuntimeException: Failed to parse data with unexpected event <?issue ?>
    [stack trace identical to the executor stack trace above]

2019-08-20 13:47:03 ERROR TaskSetManager:70 - Task 0 in stage 0.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "/oak/stanford/groups/hlwill/gsmoore/projects/parser_new/Whosebug/example.py", line 10, in <module>
    xml = sql.read.format("com.databricks.spark.xml").option("rowTag", "row").load("example.xml")
  File "/share/software/user/open/spark/2.3.0/python/pyspark/sql/readwriter.py", line 166, in load
    return self._df(self._jreader.load(path))
  File "/share/software/user/open/spark/2.3.0/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/share/software/user/open/spark/2.3.0/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/share/software/user/open/spark/2.3.0/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o27.load.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.RuntimeException: Failed to parse data with unexpected event <?issue ?>
    [stack trace identical to the executor stack trace above]

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1587)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage.apply(DAGScheduler.scala:1586)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
    at org.apache.spark.util.EventLoop$$anon.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2124)
    at org.apache.spark.rdd.RDD$$anonfun$fold.apply(RDD.scala:1092)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.fold(RDD.scala:1086)
    at org.apache.spark.rdd.RDD$$anonfun$treeAggregate.apply(RDD.scala:1155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1131)
    at com.databricks.spark.xml.util.InferSchema$.infer(InferSchema.scala:109)
    at com.databricks.spark.xml.XmlRelation$$anonfun.apply(XmlRelation.scala:46)
    at com.databricks.spark.xml.XmlRelation$$anonfun.apply(XmlRelation.scala:46)
    at scala.Option.getOrElse(Option.scala:121)
    at com.databricks.spark.xml.XmlRelation.<init>(XmlRelation.scala:45)
    at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:65)
    at com.databricks.spark.xml.DefaultSource.createRelation(DefaultSource.scala:43)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Failed to parse data with unexpected event <?issue ?>
    [stack trace identical to the executor stack trace above]
    ... 1 more

Finally figured it out: it turns out I had been using an outdated version of spark-xml. At least as of right now, the correct way to load the Databricks packages is the following:

spark-submit --packages com.databricks:spark-csv_2.11:1.5.0,com.databricks:spark-xml_2.11:0.6.0 example.py

This gets two things right:

  1. All of the packages run on the same Scala version, 2.11 (which should match the Scala version your Spark build uses). You can check which version of Spark you are running with spark-shell --version; both versions can also be confirmed from PySpark, as shown in the snippet after this list.
  2. Each package is the latest version listed on its GitHub page.
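
As a sanity check, both versions can also be printed from inside PySpark. Note that sc._jvm is py4j's internal bridge into the JVM rather than a public API, so treat this as a debugging aid only:

# Print the Spark version and the Scala version the driver JVM is running.
# sc._jvm reaches through py4j; scala.util.Properties.versionString() is a
# static forwarder on the Scala standard library's Properties object.
print(sc.version)                                     # e.g. 2.3.0
print(sc._jvm.scala.util.Properties.versionString())  # e.g. version 2.11.8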