How to overwrite S3 data using a Glue job in AWS
I have a DynamoDB table and I am using a Glue job to send the DynamoDB data to S3. Whenever the Glue job runs and writes new data to S3, it appends to the old data instead of overwriting it. It should overwrite the old data. Job script below:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "abc", table_name = "xyz", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "abc", table_name = "xyz", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://xyztable"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://xyztable"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()
If you are trying to overwrite the data in S3: a DynamicFrame currently has no way to set a save mode, but you can convert it with toDF() and use the DataFrame writer instead.
Replace your second-to-last line with this:
df = dropnullfields3.toDF()
df.write.mode('overwrite').parquet('s3://xyzPath')
This will replace the folder on every run. We use the PySpark library here because the Glue library does not currently support save modes.
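If you would rather keep the DynamicFrame writer (for example, to keep the job bookmark via transformation_ctx), one alternative is to clear the target prefix before writing. A minimal sketch, assuming a Glue version that provides GlueContext.purge_s3_path (available in Glue 1.0 and later) and reusing the names from the script above; this only runs inside a Glue job, so adjust paths to your own bucket:

```python
# Delete everything under the output prefix, then write as before.
# retentionPeriod is in hours; 0 removes all existing objects immediately.
glueContext.purge_s3_path("s3://xyztable", options={"retentionPeriod": 0})

datasink4 = glueContext.write_dynamic_frame.from_options(
    frame=dropnullfields3,
    connection_type="s3",
    connection_options={"path": "s3://xyztable"},
    format="parquet",
    transformation_ctx="datasink4",
)
job.commit()
```

Note that purge_s3_path deletes objects permanently, so make sure nothing else writes to that prefix before relying on this approach.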