Exception when running two Pentaho Kettle sessions simultaneously to load two different CSV files into two different tables using sqlldr
I get an exception in the console when I invoke two different transformations to load two different sets of CSV files into two different tables. The two tasks have nothing in common. I invoke the transformations by running kitchen.bat from two separate consoles.
When run together, one of the two fails most of the time, though not always, having tested this scenario many times.
Run one at a time, neither produces any errors and both complete successfully. What is causing this exception?
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>SQL*Loader-951: Error calling once/load initialization
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>ORA-00604: error occurred at recursive SQL level 1
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Error in step, asking everyone to stop because of:
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - IO exception occured: The pipe has been ended
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - The pipe has been ended
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Error while closing output
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : java.io.IOException: The pipe is being closed
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.io.FileOutputStream.writeBytes(Native Method)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.io.FileOutputStream.write(FileOutputStream.java:345)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:316)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.io.BufferedWriter.close(BufferedWriter.java:266)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at org.pentaho.di.trans.steps.orabulkloader.OraBulkDataOutput.close(OraBulkDataOutput.java:95)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at org.pentaho.di.trans.steps.orabulkloader.OraBulkLoader.dispose(OraBulkLoader.java:598)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:96)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - at java.lang.Thread.run(Thread.java:745)
tasklist: 2019/10/04 14:27:51 - SOME_FILE_INPUT.0 - Finished processing (I=10058, O=0, R=5, W=10056, U=0, E=0)
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - Exit Value of sqlldr: 1
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - Finished processing (I=0, O=54, R=55, W=54, U=0, E=1)
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - Transformation detected one or more steps with errors.
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - Transformation is killing the other steps!
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!
It looks like SQL*Loader takes more locks than you would expect, causing the other session to time out.
Make sure the tables are not connected by foreign keys. If they are not, SQL*Loader may be locking the entire schema or some other resource.
Using a unique control file for each sqlldr instance solved the problem.
Executing sqlldr in parallel from different jobs with the same control file caused one sqlldr instance to overwrite data previously written to the control file by the other instance, resulting in errors and locking.
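The fix above can be sketched as a small shell wrapper. This is a minimal illustration, not the poster's actual setup: the table, column, user, and file names are placeholders, and the sqlldr invocation is shown commented out. The key point is that each run generates its own control file path with mktemp, so two parallel jobs can never clobber each other's control file.

```shell
#!/bin/sh
# Hypothetical sketch: give every sqlldr run its own control file so that
# parallel loads never overwrite one another. All names below are placeholders.
CTL=$(mktemp /tmp/load_XXXXXX.ctl)   # unique control-file path per invocation

# Write this run's control file (table/columns are illustrative only).
cat > "$CTL" <<'EOF'
LOAD DATA
INFILE 'data.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2)
EOF

echo "control file: $CTL"
# The real load would then use the per-run control and log files, e.g.:
# sqlldr userid=user/pass@db control="$CTL" log="${CTL%.ctl}.log"
```

In Pentaho's Oracle Bulk Loader step, the same idea means pointing each transformation's control-file field at a distinct path (for example, one that embeds the transformation name) instead of sharing a single file.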