How to use S3 Select with tab separated csv files

I am using this script to query data from a CSV file stored in an AWS S3 bucket. It works fine for CSV files that were originally saved in comma-separated format, but I have a lot of data saved with a tab separator (sep='\t'), which makes the code fail.

The original data is huge and would be hard to rewrite. Is there a way to query the data where I can specify the delimiter/separator for the CSV file?

I followed this post: https://towardsdatascience.com/how-i-improved-performance-retrieving-big-data-with-s3-select-2bd2850bc428 ... and I'd like to thank the author for the tutorial; it saved me a lot of time.

Here is the code:

import boto3
import os
import pandas as pd

S3_KEY = r'source/df.csv'
S3_BUCKET = 'my_bucket'
TARGET_FILE = 'dataset.csv'

aws_access_key_id = 'my_key'
aws_secret_access_key = 'my_secret'

s3_client = boto3.client(service_name='s3',
                         region_name='us-east-1',
                         aws_access_key_id=aws_access_key_id,
                         aws_secret_access_key=aws_secret_access_key)

query = """SELECT column1
        FROM S3Object
        WHERE column1 = '4223740573'"""

result = s3_client.select_object_content(Bucket=S3_BUCKET,
                                         Key=S3_KEY,
                                         ExpressionType='SQL',
                                         Expression=query,
                                         InputSerialization={'CSV': {'FileHeaderInfo': 'Use'}},
                                         OutputSerialization={'CSV': {}})

# remove the file if it exists, since we append filtered rows line by line
if os.path.exists(TARGET_FILE):
    os.remove(TARGET_FILE)

with open(TARGET_FILE, 'a+') as filtered_file:
    # write header as a first line, then append each row from S3 select
    filtered_file.write('Column1\n')
    for record in result['Payload']:
        if 'Records' in record:
            res = record['Records']['Payload'].decode('utf-8')
            filtered_file.write(res)
result = pd.read_csv(TARGET_FILE)

The InputSerialization options also let you specify:

FieldDelimiter - A single character used to separate individual fields in a record. Instead of the default value (a comma), you can specify an arbitrary delimiter.

Since tabs separate the fields within each row (RecordDelimiter, by contrast, controls how rows are separated), FieldDelimiter is the option you need. So you can try:

result = s3_client.select_object_content(
    Bucket=S3_BUCKET,
    Key=S3_KEY,
    ExpressionType='SQL',
    Expression=query,
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use', 'FieldDelimiter': '\t'}},
    OutputSerialization={'CSV': {}})
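
For completeness, here is a minimal end-to-end sketch under the same assumptions as the code above (S3_BUCKET, S3_KEY, and column1 are placeholders from the question, and credentials are assumed to come from the default credential chain rather than being hardcoded). OutputSerialization accepts a FieldDelimiter as well, in case you want the filtered rows back as tab-separated text:

import boto3

s3_client = boto3.client('s3', region_name='us-east-1')

# Read tab-separated input and write tab-separated output.
# FieldDelimiter sets the character between fields; it defaults to a comma.
result = s3_client.select_object_content(
    Bucket=S3_BUCKET,
    Key=S3_KEY,
    ExpressionType='SQL',
    Expression="SELECT column1 FROM S3Object WHERE column1 = '4223740573'",
    InputSerialization={'CSV': {'FileHeaderInfo': 'Use', 'FieldDelimiter': '\t'}},
    OutputSerialization={'CSV': {'FieldDelimiter': '\t'}})

# The response payload is an event stream; only 'Records' events carry data
# (the stream also emits Stats, Progress, Cont, and End events).
for event in result['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'), end='')

With that change, the rest of your script (writing the records to TARGET_FILE and loading it with pd.read_csv) should work unchanged, since the output delimiter stays a comma unless you override it.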