AWS DMS S3 to SQL Server Migrations - Specify DATETIME2 as SQL Server data type
I am trying to populate a SQL Server 2014 table from S3 using AWS Database Migration Service (DMS). I have the following S3 schema:
{
"TableCount": "1",
"Tables": [
{
"TableName": "employee",
"TablePath": "public/employee/",
"TableOwner": "",
"TableColumns": [
{
"ColumnName": "Id",
"ColumnType": "INT8",
"ColumnNullable": "false",
"ColumnIsPk": "true"
},
{
"ColumnName": "HireDate",
"ColumnType": "TIMESTAMP"
},
{
"ColumnName": "Name",
"ColumnType": "STRING",
"ColumnLength": "20"
}
],
"TableColumnsTotal": "3"
}
]
}
When I run the migration task, I get the overflow error below, because SQL Server will not allow the value 2018-04-11 08:02:16.788027 from S3 to be inserted into a SQL Server DATETIME column:
[TARGET_LOAD ]E: Failed to execute statement: 'INSERT INTO [public].[employee]([Id],[HireDate],[Name]) values (?,?,?)' [1022502] (ar_odbc_stmt.c:2456)
[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 22008 NativeError: 0 Message: [Microsoft][ODBC Driver 13 for SQL Server]Datetime field overflow. Fractional second precision exceeds the scale specified in the parameter binding. Line: 1 Column: 4 [1022502] (ar_odbc_stmt.c:2462)
[TARGET_LOAD ]E: Invalid input for column 'HireDate' of table 'public'.'employee' in line number 1.(sqlserver_endpoint_imp.c:2357)
My question is: is there a way to tell AWS DMS to create the TIMESTAMP S3 data as a DATETIME2 column in SQL Server?
Note: the table is dropped and recreated on every migration run. I could work around this by manually creating the table in SQL Server with HireDate as DATETIME2 and setting the DMS migration 'target table preparation mode' to TRUNCATE instead of drop/create, but that is not ideal for my current solution.
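The error is a precision mismatch rather than a range problem. A small Python sketch (illustrative only, not part of DMS) makes the arithmetic concrete: the S3 value carries six fractional-second digits, SQL Server DATETIME effectively supports only three, while DATETIME2(7) supports up to seven.

```python
from datetime import datetime

# The value DMS tried to insert from S3.
value = "2018-04-11 08:02:16.788027"

# Parse it; the fractional part is microseconds (scale 6).
ts = datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f")
fractional_digits = len(value.split(".")[1])

# SQL Server DATETIME stores fractional seconds to roughly
# 1/300 s (effectively scale 3), so a scale-6 value overflows
# the parameter binding. DATETIME2(7) accepts up to 7 digits.
print(fractional_digits)        # 6
print(fractional_digits <= 3)   # False -> overflows DATETIME
print(fractional_digits <= 7)   # True  -> fits DATETIME2(7)
```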
The solution is to specify the "ColumnScale" attribute for the timestamp column in the S3 schema, which ensures the SQL Server target column is created as DATETIME2(7), i.e.:
{
"TableCount": "1",
"Tables": [
{
"TableName": "employee",
"TablePath": "public/employee/",
"TableOwner": "",
"TableColumns": [
{
"ColumnName": "Id",
"ColumnType": "INT8",
"ColumnNullable": "false",
"ColumnIsPk": "true"
},
{
"ColumnName": "HireDate",
"ColumnType": "TIMESTAMP",
"ColumnScale": "7"
},
{
"ColumnName": "Name",
"ColumnType": "STRING",
"ColumnLength": "20"
}
],
"TableColumnsTotal": "3"
}
]
}
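If the external table definition is generated rather than hand-written, the fix can be applied programmatically. The sketch below (an assumption about your workflow, not a DMS feature) loads the definition JSON and adds "ColumnScale": "7" to every TIMESTAMP column before uploading it back to S3:

```python
import json

# Sketch: patch a DMS S3 external table definition so that every
# TIMESTAMP column carries ColumnScale "7" and therefore maps to
# DATETIME2(7) on the SQL Server target.
schema = json.loads("""
{
  "TableCount": "1",
  "Tables": [{
    "TableName": "employee",
    "TablePath": "public/employee/",
    "TableOwner": "",
    "TableColumns": [
      {"ColumnName": "Id", "ColumnType": "INT8",
       "ColumnNullable": "false", "ColumnIsPk": "true"},
      {"ColumnName": "HireDate", "ColumnType": "TIMESTAMP"},
      {"ColumnName": "Name", "ColumnType": "STRING", "ColumnLength": "20"}
    ],
    "TableColumnsTotal": "3"
  }]
}
""")

for table in schema["Tables"]:
    for col in table["TableColumns"]:
        if col["ColumnType"] == "TIMESTAMP":
            col["ColumnScale"] = "7"  # target becomes DATETIME2(7)

print(json.dumps(schema, indent=2))
```

Note that the attribute values in the definition are strings ("7", not 7), matching the convention used for "ColumnLength" and the other attributes.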