Jenkins pipeline: timeout in a for loop is not reset
I have a Jenkins pipeline script in which I use the Warnings Next Generation plugin to record issues from the log files produced by some of my tasks.
For some reason, the job sometimes gets stuck forever on one of the recordIssues tasks.
I wrapped each recordIssues call in a timeout, itself inside a try / catch block, so that each recordIssues task gets a chance to finish "properly".
def record_issues_map = [
    'UE4_AssetCheck'         : 'DataValidation',
    'UE4_Cook'               : 'DataValidation',
    'UE4_CompileBlueprints'  : 'CompileAllBlueprints',
    'UE4_MapCheckValidation' : 'MapCheckValidation',
]

timestamps {
    for ( entry in record_issues_map ) {
        try {
            timeout(time: 120, unit: 'SECONDS') {
                def log_file_path = "Saved\\Logs\\${entry.value}.log"
                echo "recordIsses called for parser ${entry.key} on log file ${log_file_path}"
                recordIssues enabledForFailure: true, failOnError: true, qualityGates: [[threshold: 1, type: 'TOTAL', unstable: false]], tools: [groovyScript(parserId: "${entry.key}", pattern: "${log_file_path}", reportEncoding: 'UTF-8')]
            }
        } catch ( e ) {
            echo "Timeout during recording of issues for ${entry.key}"
        }
    }
}
But when one of the recordIssues calls times out, every recordIssues executed after it still gets cancelled by a timeout, as you can see in the log:
[Pipeline] timeout
Timeout set to expire in 2 min 0 sec
[Pipeline] {
[Pipeline] echo
recordIsses called for parser UE4_AssetCheck on log file Saved\Logs\DataValidation.log
[Pipeline] recordIssues
[Pipeline] }
[Pipeline] // timeout
[Pipeline] echo
Timeout during recording of issues for UE4_AssetCheck
[Pipeline] timeout
Timeout set to expire in 2 min 0 sec
[Pipeline] {
[Pipeline] echo
recordIsses called for parser UE4_Cook on log file Saved\Logs\DataValidation.log
[Pipeline] recordIssues
[Pipeline] }
[Pipeline] // timeout
[Pipeline] echo
Timeout during recording of issues for UE4_Cook
[Pipeline] timeout
Timeout set to expire in 2 min 0 sec
[Pipeline] {
[Pipeline] echo
recordIsses called for parser UE4_CompileBlueprints on log file Saved\Logs\CompileAllBlueprints.log
[Pipeline] recordIssues
[Pipeline] }
[Pipeline] // timeout
[Pipeline] echo
Timeout during recording of issues for UE4_CompileBlueprints
[Pipeline] timeout
Timeout set to expire in 2 min 0 sec
[Pipeline] {
[Pipeline] echo
recordIsses called for parser UE4_MapCheckValidation on log file Saved\Logs\MapCheckValidation.log
[Pipeline] recordIssues
[Pipeline] }
[Pipeline] // timeout
[Pipeline] echo
Timeout during recording of issues for UE4_MapCheckValidation
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Package Swarms (Win64))
[Pipeline] isUnix
[Pipeline] isUnix
[Pipeline] isUnix
[Pipeline] bat
Is there any way to avoid this?
It turns out the exception I was getting was not caused by the timeout but by https://issues.jenkins-ci.org/browse/JENKINS-49732.
I replaced the for loop with record_issues_map.each { entry -> ... } to get rid of the exception.
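The rewrite itself is not shown in the answer; a minimal sketch of what it could look like, reusing the same map and loop body as in the question (the closure body is assumed, not the author's exact code):

// Same map and body as above; only the iteration changes.
// Using .each with a closure sidesteps the serialization problem that
// the for loop over map entries triggers (see JENKINS-49732).
timestamps {
    record_issues_map.each { entry ->
        try {
            timeout(time: 120, unit: 'SECONDS') {
                def log_file_path = "Saved\\Logs\\${entry.value}.log"
                echo "recordIsses called for parser ${entry.key} on log file ${log_file_path}"
                recordIssues enabledForFailure: true, failOnError: true, qualityGates: [[threshold: 1, type: 'TOTAL', unstable: false]], tools: [groovyScript(parserId: "${entry.key}", pattern: "${log_file_path}", reportEncoding: 'UTF-8')]
            }
        } catch ( e ) {
            echo "Timeout during recording of issues for ${entry.key}"
        }
    }
}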
To work around the recordIssues timeout itself, I now call scanForIssues first and then publishIssues.
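The exact split is not shown either; a minimal sketch, under the assumption that the scan and the publish both stay inside the same timeout / try pattern (scanForIssues, publishIssues and groovyScript are Warnings NG pipeline steps; the parameter set shown here simply mirrors the recordIssues call above and may need adjusting):

// Sketch: recordIssues split into scanForIssues (parsing the log file)
// followed by publishIssues (publishing the resulting report).
timestamps {
    record_issues_map.each { entry ->
        try {
            timeout(time: 120, unit: 'SECONDS') {
                def log_file_path = "Saved\\Logs\\${entry.value}.log"
                echo "scanForIssues called for parser ${entry.key} on log file ${log_file_path}"
                def report = scanForIssues tool: groovyScript(parserId: "${entry.key}", pattern: "${log_file_path}", reportEncoding: 'UTF-8')
                publishIssues issues: [report], failOnError: true, qualityGates: [[threshold: 1, type: 'TOTAL', unstable: false]]
            }
        } catch ( e ) {
            echo "Timeout during recording of issues for ${entry.key}"
        }
    }
}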