How to export Hive data to specific columns of a MySQL table using Sqoop?
I have field1, field2, field3 in HDFS. In MySQL I have a table with id (auto increment), field1, field2, field3, and uptate_time (default CURRENT_TIMESTAMP). I want to export the three HDFS fields into this five-column MySQL table, where the two remaining columns have default values. How can I do this with Sqoop?
Use --columns "<comma separated column names>" to export to the selected columns. According to the Sqoop docs (Table 29):
You can select a subset of columns and control their ordering by using the --columns argument. This should include a comma-delimited list of columns to export. For example: --columns "col1,col2,col3". Note that columns that are not included in the --columns parameter need to have either defined default value or allow NULL values. Otherwise your database will reject the imported data which in turn will make Sqoop job fail.
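A minimal export command for this case might look like the sketch below. The JDBC URL, credentials, and export directory are placeholder assumptions to adapt to your environment; '\001' is Hive's default field delimiter, so change it if your table uses another one.

  # Export only the three data columns; MySQL fills id via auto increment
  # and uptate_time via its CURRENT_TIMESTAMP default.
  sqoop export \
    --connect jdbc:mysql://dbhost:3306/mydb \
    --username dbuser \
    --password-file /user/dbuser/.mysql.password \
    --table mysqltable \
    --columns "field1,field2,field3" \
    --export-dir /user/hive/warehouse/mytable \
    --input-fields-terminated-by '\001'

Because id and uptate_time are omitted from --columns, the export succeeds only as long as those columns keep their defaults (which they do in your schema).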