Database connector giving error: "Bulk mode operations require a Iterable/Iterator as payload. Got java.util.LinkedHashMap instead"
I am using a DataMapper to convert the payload into a collection before passing it to the Database connector. What am I missing here?
The code is as follows:
<data-mapper:config name="CSV_To_Map"
transformationGraphPath="csv_to_map.grf" doc:name="CSV_To_Map"/>
<batch:job name="batchcustomerimport3.6.0Batch1">
<batch:input>
<file:inbound-endpoint path="C:\Users\hhernandez\testworkspace411\batchcustomerimport3.6.0Collection\src\test\resources\in" moveToPattern="#[message.inboundProperties.originalFileName]" moveToDirectory="C:\Users\hhernandez\testworkspace411\batchcustomerimport3.6.0Collection\src\test\resources\out" responseTimeout="10000" doc:name="File">
<file:filename-regex-filter pattern="CustomerImportTest.csv" caseSensitive="true"/>
</file:inbound-endpoint>
<file:file-to-string-transformer doc:name="File to String"/>
<data-mapper:transform config-ref="CSV_To_Map" doc:name="CSV To Map"/>
<logger level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<db:update config-ref="Generic_Database_Configuration" bulkMode="true" doc:name="Database">
<db:parameterized-query><![CDATA[UPDATE FBF05P
SET STORE#X = #[payload['STORENO']]
WHERE VIP#X = #[payload['CUSTID']]]]></db:parameterized-query>
</db:update>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
Change the batch step so that it updates the records inside a batch commit, like this:
<batch:step name="Batch_Step">
<batch:commit doc:name="Batch Commit" size="500">
<db:update config-ref="Generic_Database_Configuration" bulkMode="true" doc:name="Database">
<db:parameterized-query>
<![CDATA[
UPDATE FBF05P
SET STORE#X = #[payload['STORENO']]
WHERE VIP#X = #[payload['CUSTID']]
]]>
</db:parameterized-query>
</db:update>
</batch:commit>
</batch:step>
Your input is a list, but inside a batch step the payload is a single record, i.e. a Map, which is why bulk mode rejects it. With a batch commit, you hand the connector a block of records to update in one go.
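To see why the error occurs, it helps to look at the payload shapes involved: bulk mode wants an Iterable of parameter maps (one per row), while a batch step delivers one record's map at a time. A minimal Java sketch of that distinction (the STORENO/CUSTID keys come from the flow above; the helper method and sample values are illustrative, not the connector's actual code):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BulkPayloadShapes {
    // The gist of bulk mode's check: the payload must be an Iterable or Iterator.
    static boolean acceptedByBulkMode(Object payload) {
        return payload instanceof Iterable || payload instanceof Iterator;
    }

    public static void main(String[] args) {
        // Outside a batch step: the whole collection, one parameter map per row.
        List<Map<String, Object>> wholeList = List.of(
                Map.of("STORENO", 101, "CUSTID", "C-001"),
                Map.of("STORENO", 102, "CUSTID", "C-002"));

        // Inside a batch step: a single record, i.e. one LinkedHashMap.
        Map<String, Object> singleRecord = new LinkedHashMap<>(
                Map.of("STORENO", 101, "CUSTID", "C-001"));

        System.out.println(acceptedByBulkMode(wholeList));    // true
        System.out.println(acceptedByBulkMode(singleRecord)); // false -> the error above
    }
}
```

The batch commit restores the Iterable shape by handing the connector a block of up to `size` records at once instead of one map per invocation.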
Hope this helps.