awswrangler.s3.read_parquet ignores partition_filter argument
The partition_filter argument to wr.s3.read_parquet() fails to filter a partitioned parquet dataset on S3. Here is a reproducible example (which may require a correctly configured boto3_session argument):
Dataset setup:
import pandas as pd
import awswrangler as wr
import boto3

s3_path = "s3://bucket-name/folder"
df = pd.DataFrame({
    "val": [1, 3, 2, 5],
    "date": ["2021-04-01", "2021-04-01", "2021-04-02", "2021-04-03"],
})
wr.s3.to_parquet(
    df=df,
    path=s3_path,
    dataset=True,
    partition_cols=["date"],
)
#> {'paths': ['s3://bucket-name/folder/date=2021-04-01/38399541e6fe4fa7866181479dd28e8e.snappy.parquet',
#> 's3://bucket-name/folder/date=2021-04-02/0a556212b5f941c7aa3c3775d2387419.snappy.parquet',
#> 's3://bucket-name/folder/date=2021-04-03/cb71397bea104787a50a90b078d564bd.snappy.parquet'],
#> 'partitions_values': {'s3://bucket-name/folder/date=2021-04-01/': ['2021-04-01'],
#> 's3://bucket-name/folder/date=2021-04-02/': ['2021-04-02'],
#> 's3://bucket-name/folder/date=2021-04-03/': ['2021-04-03']}}
The partitioned data is then visible in the S3 console. But reading back with a date filter returns all 4 records:
wr.s3.read_parquet(
    path=s3_path,
    partition_filter=lambda x: x["date"] >= "2021-04-02",
)
#> val
#> 0 1
#> 1 3
#> 2 2
#> 3 5
In fact, even substituting lambda x: False still returns all 4 rows. What am I missing? This is from the documentation:
partition_filter (Optional[Callable[[Dict[str, str]], bool]]) – Callback Function filters to apply on PARTITION columns (PUSH-DOWN filter). This function MUST receive a single argument (Dict[str, str]) where keys are partitions names and values are partitions values. Partitions values will be always strings extracted from S3. This function MUST return a bool, True to read the partition or False to ignore it. Ignored if dataset=False. E.g lambda x: True if x["year"] == "2020" and x["month"] == "1" else False
I note that the returned dataframe does not include the partition column 'date' from the uploaded data - I see no reference to this removal in the documentation, and it's not clear whether it's related.
Per the documentation: "Ignored if dataset=False." Adding dataset=True as an argument to your read_parquet call does the trick.
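A minimal sketch of the corrected call (same s3_path and example data as in the question; the expected output below is inferred from that data, and exact dtypes and row ordering may vary):

wr.s3.read_parquet(
    path=s3_path,
    dataset=True,  # treat the path as a partitioned dataset so the push-down filter applies
    partition_filter=lambda x: x["date"] >= "2021-04-02",
)
#>    val        date
#> 0    2  2021-04-02
#> 1    5  2021-04-03

With dataset=True the partition column 'date' is also reconstructed from the S3 key paths, which explains the missing column noted in the question, and lambda x: False now returns an empty dataframe as expected.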