
Vertica data into pySpark throws "Failed to find data source"

I have Spark 3.2 and Vertica 9.2.

from pyspark.sql import SparkSession

# spark.jars takes a single comma-separated list of paths, so both jars go in one config value.
spark = SparkSession.builder.appName("Ukraine").master("local[*]")\
    .config("spark.jars", "/home/shivamanand/spark-3.2.1-bin-hadoop3.2/jars/vertica-jdbc-9.2.1-0.jar,"
                          "/home/shivamanand/spark-3.2.1-bin-hadoop3.2/jars/vertica-spark-3.2.1.jar")\
    .getOrCreate()

table = "test"
db = "myDB"
user = "myUser"
password = "myPassword"
host = "myVerticaHost"
part = "12"

opt = {"host" : host, "table" : table, "db" : db, "numPartitions" : part, "user" : user, "password" : password}

df = spark.read.format("com.vertica.spark.datasource.DefaultSource").options(**opt).load()

This gives:

Py4JJavaError: An error occurred while calling o77.load.
: java.lang.ClassNotFoundException: 
Failed to find data source: com.vertica.spark.datasource.DefaultSource. Please find packages at
http://spark.apache.org/third-party-projects.html

~/shivamenv/venv/lib/python3.7/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
    109     def deco(*a, **kw):
    110         try:
--> 111             return f(*a, **kw)
    112         except py4j.protocol.Py4JJavaError as e:
    113             converted = convert_exception(e.java_exception)

~/shivamenv/venv/lib/python3.7/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(
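
As a quick sanity check (a minimal sketch, assuming the session built above), the effective spark.jars value can be printed to confirm that both jar paths actually reached the JVM:

# Should print both jar paths, comma-separated; an empty string means the jars were never registered with the driver.
print(spark.sparkContext.getConf().get("spark.jars", ""))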

Before this step, I wget the two jars into the Spark jars folder (the same jars referenced in the SparkSession config above).

I obtained them from

https://libraries.io/maven/com.vertica.spark:vertica-spark
https://www.vertica.com/download/vertica/client-drivers/

I'm not sure what I'm doing wrong. Is there an alternative to the spark.jars option?
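
One alternative would be to let Spark resolve the connector from Maven via spark.jars.packages instead of pointing spark.jars at downloaded files. This is only a sketch; the version string below is a placeholder that would need to be checked against the releases listed on the libraries.io page above:

from pyspark.sql import SparkSession

# Placeholder version: pick an actual release of com.vertica.spark:vertica-spark.
spark = SparkSession.builder.appName("Ukraine").master("local[*]")\
    .config("spark.jars.packages", "com.vertica.spark:vertica-spark:3.2.1")\
    .getOrCreate()

Jars can also be passed on the command line, e.g. pyspark --jars /path/to/vertica-jdbc.jar,/path/to/vertica-spark.jar, which is equivalent to setting spark.jars in code.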

In the link below

https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SparkConnector/GettingTheSparkConnector.htm?tocpath=Integrating%20with%20Apache%20Spark%7C_____1

they mention

Both of these libraries are installed with the Vertica server and are available on all nodes in the Vertica cluster in the following locations:

The Spark Connector files are located in /opt/vertica/packages/SparkConnector/lib. The JDBC client library is /opt/vertica/java/vertica-jdbc.jar

Should I try replacing the jars in my local folder with these?

There is no need to replace the jars in your local folder. Once you have copied them to the Spark cluster, run the spark-shell command with the --jars option; an example follows below. As a side note, Vertica officially supports only Spark 2.x with Vertica 9.2. Hope this helps.

https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SupportedPlatforms/SparkIntegration.htm

spark-shell --jars vertica-spark2.1_scala2.11.jar,vertica-jdbc-9.2.1-11.jar

18:26:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://dholmes14:4040
Spark context available as 'sc' (master = local[*], app id = local-1597170403068).
Spark session available as 'spark'.
Welcome to


      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_252)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.SparkSession

import org.apache.spark.storage._

val df1 = spark.read.format("com.vertica.spark.datasource.DefaultSource").option("host", "").option("port", 5433).option("db", "").option("user", "dbadmin").option("dbschema", "").option("table", "").option("numPartitions", 3).option("LogLevel", "DEBUG").load()

val df2 = df1.filter("column_name between 800055 and 8000126").groupBy("column1", "column2").count() // row counts per (column1, column2) group

spark.time(df2.show())
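
Since the question uses pySpark rather than the Scala shell, a rough pySpark equivalent of the same approach is sketched below. It assumes the shell was started with pyspark --jars vertica-spark2.1_scala2.11.jar,vertica-jdbc-9.2.1-11.jar (so spark is already defined), and the empty option values are placeholders exactly as in the Scala example above:

# Read the Vertica table through the connector; fill in host, db, dbschema and table.
df1 = (spark.read.format("com.vertica.spark.datasource.DefaultSource")
       .option("host", "")
       .option("port", 5433)
       .option("db", "")
       .option("user", "dbadmin")
       .option("dbschema", "")
       .option("table", "")
       .option("numPartitions", 3)
       .load())

df1.show()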