How to efficiently split a large dataframe into many parquet files?
Consider the following dataframe:
import pandas as pd
import numpy as np
import pyarrow.parquet as pq
import pyarrow as pa
idx = pd.date_range('2017-01-01 12:00:00.000', '2017-03-01 12:00:00.000', freq='T')
dataframe = pd.DataFrame({'numeric_col': np.random.rand(len(idx)),
                          'string_col': pd.util.testing.rands_array(8, len(idx))},
                         index=idx)
dataframe
Out[30]:
numeric_col string_col
2017-01-01 12:00:00 0.4069 wWw62tq6
2017-01-01 12:01:00 0.2050 SleB4f6K
2017-01-01 12:02:00 0.5180 cXBvEXdh
2017-01-01 12:03:00 0.3069 r9kYsJQC
2017-01-01 12:04:00 0.3571 F2JjUGgO
2017-01-01 12:05:00 0.3170 8FPC4Pgz
2017-01-01 12:06:00 0.9454 ybeNnZGV
2017-01-01 12:07:00 0.3353 zSLtYPWF
2017-01-01 12:08:00 0.8510 tDZJrdMM
2017-01-01 12:09:00 0.4948 S1Rm2Sqb
2017-01-01 12:10:00 0.0279 TKtmys86
2017-01-01 12:11:00 0.5709 ww0Pe1cf
2017-01-01 12:12:00 0.8274 b07wKPsR
2017-01-01 12:13:00 0.3848 9vKTq3M3
2017-01-01 12:14:00 0.6579 crYxFvlI
2017-01-01 12:15:00 0.6568 yGUnCW6n
I need to write this dataframe into many parquet files. Of course, the following works:
table = pa.Table.from_pandas(dataframe)
pq.write_table(table, '\\mypath\\dataframe.parquet', flavor='spark')
My problem is that the resulting (single) parquet file gets too big. How can I efficiently (memory-wise, speed-wise) split the writing into daily parquet files (while keeping the spark flavor)? These daily files will later be read in parallel with spark. Thanks!
Making a string column dt based on the index will allow you to write out your data partitioned by date:
pq.write_to_dataset(table, root_path='dataset_name', partition_cols=['dt'], flavor='spark')
Answer based on this source (note that the source incorrectly lists the partition argument as partition_columns).
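The answer as quoted does not show the dt column itself. A minimal sketch of one way to build it, under my own assumption that one partition per calendar day is wanted (this is essentially what the modified answer below achieves via .dt.date):

# Assumed sketch, not the answerer's original code: format the
# DatetimeIndex as 'YYYY-MM-DD' strings so each day becomes one partition.
dataframe['dt'] = dataframe.index.strftime('%Y-%m-%d')
table = pa.Table.from_pandas(dataframe)
pq.write_to_dataset(table, root_path='dataset_name',
                    partition_cols=['dt'], flavor='spark')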
The solution proposed by David does not solve the problem, since it generates one parquet file per index value. This slightly modified version, however, does:
import pandas as pd
import numpy as np
import pyarrow.parquet as pq
import pyarrow as pa

idx = pd.date_range('2017-01-01 12:00:00.000', '2017-03-01 12:00:00.000',
                    freq='T')

df = pd.DataFrame({'numeric_col': np.random.rand(len(idx)),
                   'string_col': pd.util.testing.rands_array(8, len(idx))},
                  index=idx)

# Truncate the timestamps to calendar dates so each day becomes one partition
df["dt"] = df.index
df["dt"] = df["dt"].dt.date

table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, root_path='dataset_name', partition_cols=['dt'],
                    flavor='spark')
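Not from the answers above, but since the question mentions reading the daily files back in parallel with spark, here is a sketch of how such a partitioned directory is typically consumed, assuming pyspark is available and 'dataset_name' is the path written above:

# Assumed sketch: Spark discovers the dt=YYYY-MM-DD subdirectories as a
# partition column and reads the files in parallel; filtering on dt
# prunes whole daily partitions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('read_daily_parquet').getOrCreate()
df = spark.read.parquet('dataset_name')
df.filter(df.dt == '2017-01-15').show()

The same dataset can also be read back into pandas with pq.ParquetDataset('dataset_name').read().to_pandas().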