How to read parquet files from AWS S3 using spark dataframe in python (pyspark)
I am trying to read some parquet files stored in an S3 bucket. I am using the following code:
import boto3

s3 = boto3.resource('s3')
# get a handle on the bucket that holds your file
bucket = s3.Bucket('bucket_name')
# get a handle on the object you want (i.e. your file)
obj = bucket.Object(key = 'file/key/083b661babc54dd89139449d15fa22dd.snappy.parquet')
# get the object
response = obj.get()
# read the contents of the file and split it into a list of lines
lines = response[u'Body'].read().split('\n')
When I try to execute the last line, lines = response[u'Body'].read().split('\n'),
I get the following error:
TypeError: a bytes-like object is required, not 'str'
I am not sure how to fix this.
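As a side note on the error itself: response[u'Body'].read() returns bytes, so splitting on the str '\n' raises the TypeError; splitting on the bytes literal b'\n' (or decoding first) avoids the error, though a Parquet file is binary, so line-splitting still will not give usable data. A minimal sketch, assuming the same response object as above:

# read() returns bytes, so a bytes separator avoids the TypeError
raw_bytes = response[u'Body'].read()
chunks = raw_bytes.split(b'\n')  # no error, but Parquet is not line-oriented text
# if the object really were plain text, decode it first instead:
# lines = raw_bytes.decode('utf-8').split('\n')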
Instead of boto3, I had to use the following code:
import os
# the extra AWS/Hadoop packages must be set before the SparkContext is created
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0 pyspark-shell'

import pyspark
from pyspark.sql import SQLContext

myAccessKey = 'your key'
mySecretKey = 'your key'

sc = pyspark.SparkContext("local[*]")
sqlContext = SQLContext(sc)

# point the s3:// scheme at the native S3 filesystem and pass the credentials
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
hadoopConf.set("fs.s3.awsAccessKeyId", myAccessKey)
hadoopConf.set("fs.s3.awsSecretAccessKey", mySecretKey)

df = sqlContext.read.parquet("s3://bucket-name/path/")
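If the read succeeds, df is a regular Spark DataFrame, so a quick way to check what was loaded is:

df.printSchema()   # columns and types read from the Parquet files
df.show(5)         # preview the first rows
print(df.count())  # total number of rows under the given path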