PySpark on Windows error while writing DataFrame to CSV

I am trying to set up a local development environment for PySpark on a Windows 10 machine with PyCharm. So far I am able to read from various sources and run transformations, but when I try to write the transformed data to the local file system with df.write(), it fails with the error below.

I have tried the various answers on this topic, but all of them felt like shooting in the dark, because what works for one user does not work for another. I have winutils.exe and hadoop.dll in their respective folders. Any help in understanding and resolving this issue would be great.
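
For reference, this is roughly how I wire up the environment before creating the SparkSession. The Hadoop location is an assumption (adjust it to wherever your winutils.exe and hadoop.dll actually live); C:\spark3 matches the Spark path visible in the traceback below.

import os

hadoop_home = r"C:\hadoop"  # assumed folder whose bin\ contains winutils.exe and hadoop.dll
os.environ["HADOOP_HOME"] = hadoop_home
os.environ["SPARK_HOME"] = r"C:\spark3"  # matches the path in the traceback
os.environ["PATH"] = os.path.join(hadoop_home, "bin") + os.pathsep + os.environ["PATH"]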

The error can be reproduced on my machine with the code below. I also checked in the pyspark shell and I get the same error there:

from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType

# `spark` is predefined in the pyspark shell; create it explicitly when running as a script
spark = SparkSession.builder.appName("repro").getOrCreate()

my_list = [1, 2, 3]
df = spark.createDataFrame(my_list, IntegerType())
df.show()
df.write.csv("mypath")

This code is able to show the DataFrame and creates a directory at the write path, but nothing is written into it. (That matches the stack trace below: the FileOutputCommitter creates the output directory in setupJob, then fails when Hadoop shells out to set permissions on it.)

Loading target table
Traceback (most recent call last):
  File "E:\pyspark_boilerlpat_beginners\pipeline.py", line 35, in <module>
    pipeline.run_pipeline()
  File "E:\pyspark_boilerlpat_beginners\pipeline.py", line 25, in run_pipeline
    load_process.load_target(transformed_df)
  File "E:\pyspark_boilerlpat_beginners\load.py", line 17, in load_target
    df.write.partitionBy("workclass", "race", "sex").mode("Overwrite").option("header", "true").csv("./_data/transformed_salary_csv/")
  File "C:\spark3\python\pyspark\sql\readwriter.py", line 1372, in csv
    self._jwrite.csv(path)
  File "C:\Python\lib\site-packages\py4j\java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "C:\spark3\python\pyspark\sql\utils.py", line 111, in deco
    return f(*a, **kw)
  File "C:\Python\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o41.csv.
: ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
    at org.apache.hadoop.util.Shell.run(Shell.java:901)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1213)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1307)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:1289)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:865)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:547)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:587)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:586)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:559)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:586)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:559)
    at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:705)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:354)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:178)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:173)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute(SparkPlan.scala:180)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery(SparkPlan.scala:218)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
    at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand(DataFrameWriter.scala:989)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
    at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:979)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)

I have already tried the suggestions mentioned in other answers, but none of them worked:

  1. Writing a smaller amount of data.
  2. Placing hadoop.dll in Windows\System32.
  3. Replacing winutils.exe, in case that copy was faulty (a quick way to test it directly is sketched after this list).
  4. Checking that the Hadoop and Spark paths are set correctly.
  5. Checking that the TEMP and TMP paths match the system settings.
  6. Updating Microsoft Visual C++ and installing the x86 version, still without success.
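
One check that narrows this down is running winutils.exe directly instead of going through Spark. A minimal diagnostic sketch, assuming HADOOP_HOME is set and winutils.exe sits in its bin folder:

import os
import subprocess

winutils = os.path.join(os.environ["HADOOP_HOME"], "bin", "winutils.exe")
result = subprocess.run([winutils, "ls", "C:\\"], capture_output=True, text=True)
# If this reproduces the -1073741515 exit code from the traceback,
# winutils.exe itself cannot start, independent of Spark
print(result.returncode)
print(result.stdout or result.stderr)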

After wasting precious time, I finally found the problem. Windows is such a headache; on a Mac this works like a charm.
It is a Windows issue: "The program can't start because MSVCP100.dll is missing from your computer. Try reinstalling the program to fix this problem." That is consistent with the exit code in the traceback: -1073741515 is 0xC0000135 (STATUS_DLL_NOT_FOUND), i.e. winutils.exe could not load a DLL it depends on.
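
You can confirm the missing runtime without waiting for the dialog. A hedged check, assuming a 64-bit winutils.exe (on 64-bit Windows the 64-bit system DLLs live in System32, the 32-bit ones in SysWOW64):

import os
from pathlib import Path

dll = Path(os.environ.get("SystemRoot", r"C:\Windows")) / "System32" / "MSVCP100.dll"
# MSVCP100.dll ships with the Visual C++ 2010 runtime, not with Windows itself
print("MSVCP100.dll present" if dll.exists() else "MSVCP100.dll missing -- install the VC++ 2010 redistributable")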

I needed to install the Visual C++ 2010 Redistributable Package.

Download the Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package from the official Microsoft Download Center. Installing the Microsoft Visual C++ 2010 x64 Redistributable (vcredist_x64.exe) from the link below solved the problem.

https://www.microsoft.com/en-au/download/details.aspx?id=26999