How can I unload data from Amazon Aurora MySQL to Amazon S3 like Amazon Redshift?
I have an Amazon Elastic MapReduce (EMR) job that I want to use to process data unloaded from an Amazon Aurora MySQL table, the same way I do with Amazon Redshift. That is, run a query like:
unload ('select * from whatever where week = \'2011/11/21\'') to 's3://somebucket' credentials 'blah'
The EMR job then processes the rows in the dumped data and writes results back to S3.

Is this possible? How?
After this answer was originally written (the first answer was "no"), Aurora added this capability:
You can now use the SELECT INTO OUTFILE S3
SQL statement to query data from an Amazon Aurora database cluster and save it directly into text files in an Amazon S3 bucket. This means you no longer need the two-step process of bringing the data to the SQL client and then copying it from the client to Amazon S3. It’s an easy way to export data selectively to Amazon Redshift or any other application.
https://aws.amazon.com/about-aws/whats-new/2017/06/amazon-aurora-can-export-data-into-amazon-s3/
Aurora for MySQL does not support this.

As you may know, on a conventional server, MySQL has two complementary features, LOAD DATA INFILE
and SELECT INTO OUTFILE
, which work with files local to the server. In late 2016, Aurora announced an S3 analog to LOAD DATA INFILE
-- LOAD DATA FROM S3
-- but, at least for now, there is no equivalent in the opposite direction.
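For reference, the import direction mentioned above looks roughly like this (a minimal sketch; the bucket, file, table, and column names are placeholders, and the cluster must be granted an IAM role that allows reading from the bucket):

```sql
-- Load a CSV file from S3 into an existing Aurora MySQL table.
-- 's3://somebucket/employees.csv', the table, and the columns are hypothetical.
LOAD DATA FROM S3 's3://somebucket/employees.csv'
    INTO TABLE employees
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (id, name, department);
```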
This now appears to be supported. The command is called SELECT INTO OUTFILE S3.

You can use the SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora MySQL database cluster and save it directly into text files stored in an Amazon S3 bucket. This feature was added quite a while ago.
Example:
SELECT * FROM employees INTO OUTFILE S3 's3-us-west-2://aurora-select-into-s3-pdx/sample_employee_data'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
Here are all the supported options:
SELECT
    [ALL | DISTINCT | DISTINCTROW ]
    [HIGH_PRIORITY]
    [STRAIGHT_JOIN]
    [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
    [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr ...]
    [FROM table_references
        [PARTITION partition_list]
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}
        [ASC | DESC], ... [WITH ROLLUP]]
    [HAVING where_condition]
    [ORDER BY {col_name | expr | position}
        [ASC | DESC], ...]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
    [PROCEDURE procedure_name(argument_list)]
    INTO OUTFILE S3 's3_uri'
    [CHARACTER SET charset_name]
    [export_options]
    [MANIFEST {ON | OFF}]
    [OVERWRITE {ON | OFF}]

export_options:
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
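Of those options, MANIFEST ON writes a JSON manifest listing the generated data files (which a later LOAD DATA FROM S3 can consume), and OVERWRITE ON replaces any existing files at the target prefix. A sketch combining these with the export options above, using the hypothetical table and week filter from the question:

```sql
-- Export one week of data, overwriting any previous export at this prefix
-- and writing a manifest of the output files. Bucket and table are placeholders.
SELECT * FROM whatever
WHERE week = '2011/11/21'
INTO OUTFILE S3 's3-us-west-2://somebucket/whatever_week'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON
    OVERWRITE ON;
```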
You can find it in the AWS documentation here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Integrating.SaveIntoS3.html