Spark SQL security considerations

What are the security considerations when accepting and executing arbitrary Spark SQL queries?

Imagine the following setup:

Two files on HDFS are registered as the tables a_secrets and b_secrets:

# must only be accessed by clients with access to all of customer a's data
spark.read.csv("/customer_a/secrets.csv").createTempView("a_secrets")

# must only be accessed by clients with access to all of customer b's data
spark.read.csv("/customer_b/secrets.csv").createTempView("b_secrets")

I can secure these two views with simple HDFS file permissions. But suppose I have the following logical views on those tables that I want to expose:

# only access for clients with access to customer a's account no 1
spark.sql("SELECT * FROM a_secrets WHERE account = 1").createTempView("a1_secrets")

# only access for clients with access to customer a's account no 2
spark.sql("SELECT * FROM a_secrets WHERE account = 2").createTempView("a2_secrets")


# only access for clients with access to customer b's account no 1
spark.sql("SELECT * FROM b_secrets WHERE account = 1").createTempView("b1_secrets")

# only access for clients with access to customer b's account no 2
spark.sql("SELECT * FROM b_secrets WHERE account = 2").createTempView("b2_secrets")

Now assume I receive an arbitrary (user, pass, query) tuple. I get the list of accounts the user can access:

groups = get_groups(user, pass)

and extract the logical query plan of the user's query:

spark.sql(query).explain(true)

which gives me a query plan along the following lines (this exact query plan is made up):

== Analyzed Logical Plan ==
account: int, ... more fields
Project [account#0 ... more fields]
+- SubqueryAlias a1_secrets
   +- Relation [... more fields]
      +- Join Inner, (some_col#0 = another_col#67)
         :- SubqueryAlias a2_secrets
         :  +- Relation[... more fields] csv
== Physical Plan ==
... InputPaths: hdfs:/customer_a/secrets.csv ...

Assuming I can parse a logical query plan to determine exactly which tables and files are being accessed, is it safe to grant access to the data produced by the query? I'm thinking of potential problems such as:

- Are there ways to load new data and register it as tables through pure Spark SQL?
- Are there ways to register UDFs / execute arbitrary code purely through spark.sql(query)?
- Do users have access to any SQL functions with side effects (that modify or access unauthorized data)?

Summarizing: can I safely accept arbitrary SQL, register it with df = spark.sql(query), analyse data access using df.explain(True), and then return results using e.g. df.collect()?
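For reference, the gatekeeping idea in this question could be sketched as follows (pure Python over the plan text; the regex and the allowed-prefix check are illustrative assumptions, not a vetted implementation, and as the answer argues this kind of check is not sufficient):

```python
import re

def extract_input_paths(plan_text):
    """Pull hdfs: paths out of an explain-output string.

    Illustrative only: assumes every accessed file shows up as an
    'hdfs:...' token in the plan text, which is NOT a safe assumption.
    """
    return set(re.findall(r"hdfs:[^\s,\]]+", plan_text))

def is_allowed(plan_text, allowed_prefixes):
    """Check that every extracted input path falls under an allowed prefix."""
    paths = extract_input_paths(plan_text)
    # Pitfall: an empty path set passes this check; a real gatekeeper would
    # have to treat "no recognizable inputs" as a rejection.
    return all(any(p.startswith(pre) for pre in allowed_prefixes)
               for p in paths)

# The made-up physical plan from the question:
plan = "== Physical Plan ==\n... InputPaths: hdfs:/customer_a/secrets.csv ..."

print(is_allowed(plan, ["hdfs:/customer_a/"]))  # True
print(is_allowed(plan, ["hdfs:/customer_b/"]))  # False
```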

Edit (23 Jan 15:29): edited to include an "EXPLAIN" prefix in the query.

TL;DR You should never execute any untrusted code on your Spark cluster.

Are there ways to load new data and register it as tables through pure Spark SQL?

Yes. CREATE TABLE can be executed using the sql method, so as long as users have permissions to access the file system, they can create tables.
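As a sketch of that attack surface (the table name pwned is hypothetical; the path is the one from the question's setup), a single "pure SQL" statement such as the following registers the protected file as a fresh table, bypassing the filtered views entirely:

```sql
CREATE TABLE pwned
USING csv
OPTIONS (path 'hdfs:/customer_b/secrets.csv')
```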

Are there ways to register UDFs / execute arbitrary code purely through spark.sql(query)?

Yes, as long as they can control the classpath, which can be modified with SQL:

spark.sql("""add jar URI""")
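By extension (a sketch: the jar location, the class com.evil.Exec, and the function name are all made up, and registering Hive-style UDFs this way assumes Hive support is enabled), once a jar is on the classpath, arbitrary code can be wired up as a SQL function:

```sql
ADD JAR 'hdfs:/tmp/evil.jar';                      -- hypothetical jar location
CREATE TEMPORARY FUNCTION exec AS 'com.evil.Exec'; -- hypothetical class
SELECT exec('anything');
```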

Do users have access to any SQL functions with side effects (that modify or access unauthorized data)?

Effectively yes (by extension of the previous point).

Can I safely accept arbitrary SQL,

No.