spark_apply Cannot run program "Rscript": in directory "C:\Users\username\AppData\Local\spark\spark-2.3.3-bin-hadoop2.7\tmp\local\spark-..\userFiles

Following the first instructions on spark_apply in the book "Mastering Apache Spark with R", on a local cluster under Windows and using RGui, running:

install.packages("sparklyr")
install.packages("pkgconfig")
spark_install("2.3")
Installing Spark 2.3.3 for Hadoop 2.7 or later.
spark_installed_versions()
library(dplyr,sparklyr)
sc <- spark_connect(master = "local", version = "2.3.3")
cars <- copy_to(sc, mtcars)    
cars %>% spark_apply(~round(.x))

returns the following error:

spark_apply Cannot run program "Rscript": in directory "C:\Users\username\AppData\Local\spark\spark-2.3.3-bin-hadoop2.7\tmp\local\spark-..\userFiles-..
CreateProcess error=2, The system cannot find the file specified

How do I install sparklyr correctly and get rid of this error?
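
The error means the Spark worker process could not locate an Rscript executable to launch. Before changing any configuration, a quick sanity check from the same RGui session (a minimal sketch using only base R functions) is to ask R where Rscript resolves on the PATH and where the current R installation keeps it:

# Empty string typically means Rscript is not on the PATH inherited by Spark.
Sys.which("Rscript")
# The Rscript that ships with the running R installation:
file.path(R.home("bin"), "Rscript.exe")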

The Spark nodes need to have the Rscript executable on their path. For the master node, it is possible to set the path to the Rscript executable with the following commands:

config <- spark_config()
config[["spark.r.command"]] <- "d:/path/to/R-3.4.2/bin/Rscript.exe"
sc <- spark_connect(master = "local", config = config)
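
If the worker uses the same R installation as the RGui session (an assumption; adjust the path otherwise), the path can be derived from R.home() instead of being hard-coded:

library(sparklyr)

config <- spark_config()
# Build the Rscript path from the current R installation,
# e.g. "C:/Program Files/R/R-3.6.1/bin/Rscript.exe" (illustrative path).
config[["spark.r.command"]] <- file.path(R.home("bin"), "Rscript.exe")
sc <- spark_connect(master = "local", config = config)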

More explanation and guidance for distributed environments can be found here.
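
In a distributed setup the same idea applies, but R (and therefore Rscript) must be installed on every worker node. The sketch below assumes a cluster where R is installed at the same path on each worker; the path and the yarn master are illustrative:

library(sparklyr)

config <- spark_config()
# Hypothetical worker-side location of Rscript; it must exist on every node.
config[["spark.r.command"]] <- "/usr/lib/R/bin/Rscript"
sc <- spark_connect(master = "yarn", config = config)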