Unable to build in parallel using Google's Gcloud Bazel builder

In a Bazel-based GCP build driven by cloudbuild.yaml, I use the waitFor: ['-'] keyword to run steps in parallel with the gcr.io/cloud-builders/bazel builder. When I run multiple steps with this builder and waitFor: ['-'], the steps do build in parallel, but I get the error shown below and the build fails. When I remove the waitFor: ['-'] keyword, the steps run sequentially and the build completes successfully. Is there a Bazel configuration I have to change in Gcloud's Bazel builder? The error when building in parallel looks like this:

Another command holds the client lock: 
pid=12
owner=client
cwd=/workspace

Waiting for it to complete...
Another command holds the client lock: 
pid=13
owner=client
cwd=/workspace

Waiting for it to complete...
Starting local Bazel server and connecting to it...

My cloudbuild.yaml looks like this:

steps:

-   name: "gcr.io/cloud-builders/bazel"
    id: "Building Bazel components and Uploading the component manifest for ml_cmp_1 "
    entrypoint: "bash"
    args:
        - "-c"
        - |
            cmp_dir=components/train_test_split_1
            cmp_bazel_file="$cmp_dir/BUILD.bazel"
            bazel run --remote_cache=${_BAZEL_CACHE_URL} --google_default_credentials --define=PROJ_ID=${_PROJECT_ID} //$cmp_dir:container_push

    waitFor: ["-"]

-   name: "gcr.io/cloud-builders/bazel"
    id: "Building Bazel components and Uploading the component manifest for ml_cmp_2 "
    entrypoint: "bash"
    args:
        - "-c"
        - |
            cmp_dir=components/train_test_split_2
            cmp_bazel_file="$cmp_dir/BUILD.bazel"
            bazel run --remote_cache=${_BAZEL_CACHE_URL} --google_default_credentials --define=PROJ_ID=${_PROJECT_ID} //$cmp_dir:container_push

    waitFor: ["-"]


timeout: 86399s
logsBucket: gs://some_project_id_cloudbuild/logs
options:
  machineType: 'N1_HIGHCPU_8'
  substitution_option: 'ALLOW_LOOSE'

substitutions:
    _PROJECT_ID: "some_project_id"
    _BAZEL_CACHE_URL: "https://storage.googleapis.com/some_project_id_cloudbuild/bazel-cache"

If you want to run multiple bazel commands in parallel, each of them needs its own --output_base flag. This makes each build independent.
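A sketch of one step with a per-step output base (the /workspace/.bazel_out_1 path is an arbitrary choice, not a required name). Note that --output_base is a Bazel startup option, so it must come before the run command:

```yaml
-   name: "gcr.io/cloud-builders/bazel"
    id: "Building Bazel components and Uploading the component manifest for ml_cmp_1 "
    entrypoint: "bash"
    args:
        - "-c"
        - |
            cmp_dir=components/train_test_split_1
            # --output_base is a startup option, so it goes before "run";
            # each parallel step gets its own directory so the steps stop
            # competing for the same client lock on /workspace
            bazel --output_base=/workspace/.bazel_out_1 run --remote_cache=${_BAZEL_CACHE_URL} --google_default_credentials --define=PROJ_ID=${_PROJECT_ID} //$cmp_dir:container_push

    waitFor: ["-"]
```

The second step would use a different directory, e.g. /workspace/.bazel_out_2.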

If you want them to share intermediate outputs on the same build machine, a shared --disk_cache is the simplest approach. You could also set up a full remote cache, which would cache outputs across build machines. If you want parallel builds to fully deduplicate common actions, you'll need to set up remote execution.
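For example, combining a per-step output base with a shared disk cache (both paths below are arbitrary choices):

```shell
# Each step keeps its own output base, but action and output cache
# entries go to a shared directory, so identical actions are reused
# across the parallel steps on this machine.
bazel --output_base=/workspace/.bazel_out_1 run \
    --disk_cache=/workspace/.bazel_disk_cache \
    //components/train_test_split_1:container_push
```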