Load spark bucketed table from disk previously written via saveAsTable
Version: DBR 8.4 | Spark 3.1.2
Spark lets me create a bucketed Hive table and save it to a location of my choosing.
df_data_bucketed = (df_data.write.mode('overwrite').bucketBy(9600, 'id').sortBy('id')
.saveAsTable('data_bucketed', format='parquet', path=bucketed_path)
)
I've verified that this saves the table data to the path I specified (in my case, blob storage).
In the future, the table 'data_bucketed' may be dropped from my Spark catalog or remapped to something else, and I want to "recreate" it from the data previously written to blob storage, but I can't find a way to load a pre-existing, already-stored bucketed Spark table.
The only thing that seems to work is
df_data_bucketed = (spark.read.format("parquet").load(bucketed_path)
.write.mode('overwrite').bucketBy(9600, 'id').sortBy('id')
.saveAsTable('data_bucketed', format='parquet', path=bucketed_path)
)
This seems pointless, since it essentially loads the data from disk and needlessly overwrites it with exactly the same data just to take advantage of the bucketing. (It is also slow, given the size of this data.)
You can create the table in the catalog using Spark SQL:
spark.sql("""CREATE TABLE IF NOT EXISTS tbl...""")
After that, you can tell Spark to rediscover the data by running spark.sql("MSCK REPAIR TABLE tbl").
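As a concrete sketch of this approach applied to the table in the question, assuming bucketed_path is defined as above and using a stand-in single-column schema (substitute the real columns of your data):

# Hypothetical schema -- replace (id BIGINT) with the actual columns of the data.
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS data_bucketed (id BIGINT)
    USING PARQUET
    CLUSTERED BY (id) SORTED BY (id) INTO 9600 BUCKETS
    LOCATION '{bucketed_path}'
""")

# MSCK REPAIR TABLE recovers partition directories under LOCATION; it is only
# needed if the table is partitioned as well as bucketed.
spark.sql("MSCK REPAIR TABLE data_bucketed")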
I found the answer at https://www.programmerall.com/article/3196638561/

Read from the saved Parquet file: if you want to use historically saved data, you can't use the above method, nor can you read it with spark.read.parquet() as you would regular files, since data read that way does not carry the bucket information. The correct way is to use the CREATE TABLE statement. For details, refer to https://docs.databricks.com/spark/latest/spark-sql/language-manual/create-table.html
CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
[(col_name1 col_type1 [COMMENT col_comment1], ...)]
USING data_source
[OPTIONS (key1=val1, key2=val2, ...)]
[PARTITIONED BY (col_name1, col_name2, ...)]
[CLUSTERED BY (col_name3, col_name4, ...) INTO num_buckets BUCKETS]
[LOCATION path]
[COMMENT table_comment]
[TBLPROPERTIES (key1=val1, key2=val2, ...)]
[AS select_statement]
An example is as follows:
spark.sql(
"""
|CREATE TABLE bucketed
| (name string)
| USING PARQUET
| CLUSTERED BY (name) INTO 10 BUCKETS
| LOCATION '/path/to'
|""".stripMargin)