Why does Spark fail with "value rdf is not a member of org.apache.spark.sql.SparkSession"?
I am trying to read a Turtle RDF file into Spark with SANSA-RDF and build a graph from it. I get an error when executing the following code. What am I missing?
import org.apache.jena.query.QueryFactory
import org.apache.jena.riot.Lang
import org.apache.spark.sql.SparkSession
import net.sansa_stack.rdf.spark.io.rdf._
import net.sansa_stack.rdf.spark.io._
import scala.io.Source

object SparkExecutor {
  private var ss: SparkSession = null

  def ConfigureSpark(): Unit = {
    ss = SparkSession.builder
      .master("local[*]")
      .config("spark.driver.cores", 1)
      .appName("LAM")
      .getOrCreate()
  }

  def createGraph(): Unit = {
    val filename = "xyz.ttl"
    print("Loading graph from file " + filename)
    val lang = Lang.TTL
    val triples = ss.rdf(lang)(filename)
    val graph = LoadGraph(triples)
  }
}
I am calling SparkExecutor from my main function:
object main {
  def main(args: Array[String]): Unit = {
    SparkExecutor.ConfigureSpark()
    val RDFGraph = SparkExecutor.createGraph()
  }
}
This results in the following error:
Error: value rdf is not a member of org.apache.spark.sql.SparkSession
val triples = ss.rdf(lang)
Well, if you look at the SANSA-RDF source code, there is an implicit conversion defined at
sansa-rdf-spark/src/main/scala/net/sansa_stack/rdf/spark/io/package.scala:159
rdf(lang) is not a method of SparkSession; it is a method of the implicit class RDFReader, so you need to import the package that contains the implicit definition. Try adding
import net.sansa_stack.rdf.spark.io._
and let us know how it goes.
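For context, here is a minimal sketch of the Scala mechanism involved (the names RdfSyntax, RdfReaderOps and rdfDemo are hypothetical, not SANSA's real API): the method only appears to belong to SparkSession because an implicit class wrapping SparkSession is in scope, so omitting the import produces exactly a "value ... is not a member" error.

import org.apache.spark.sql.SparkSession

// Hypothetical illustration of the extension-method pattern SANSA relies on:
// the method lives in an implicit class, not on SparkSession itself.
object RdfSyntax {
  implicit class RdfReaderOps(spark: SparkSession) {
    // Stand-in for SANSA's RDFReader.rdf(lang)(path); not the real implementation.
    def rdfDemo(lang: String)(path: String): Unit =
      println(s"[${spark.sparkContext.appName}] would parse $path as $lang")
  }
}

object ImplicitDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .appName("implicit-demo")
      .getOrCreate()

    // Without this import the next line fails to compile with
    // "value rdfDemo is not a member of org.apache.spark.sql.SparkSession".
    import RdfSyntax._
    spark.rdfDemo("TURTLE")("xyz.ttl")

    spark.stop()
  }
}

The same idea applies to SANSA: importing net.sansa_stack.rdf.spark.io._ brings its implicit class into scope, which is what lets ss.rdf(lang)(filename) resolve.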