How do I correctly add worker nodes to my cluster?

I am trying to create a cluster on Google Cloud with the following parameters:

  1. 1 master node
  2. 7 worker nodes
  3. 1 vCPU each
  4. The master should get the full SSD capacity, and the workers should get equal shares of standard disk capacity.

Here is my code:

#Create the cluster
CLUSTER = '{}-cluster'.format(PROJECT)
!gcloud dataproc clusters create $CLUSTER \
    --image-version 1.5-ubuntu18 --single-node \
    --master-machine-type n1-standard-1 \
    --master-boot-disk-type pd-ssd --master-boot-disk-size 100 \
    --num-workers 7 \
    --worker-machine-type n1-standard-1 \
    --worker-boot-disk-type pd-standard --worker-boot-disk-size 200 \
    --max-idle 3600s

Here is my error:

ERROR: (gcloud.dataproc.clusters.create) argument --single-node: At most one of --single-node | --num-secondary-workers --num-workers --secondary-worker-type can be specified.

Updated attempt:

#Create the cluster
CLUSTER = '{}-cluster'.format(PROJECT)
!gcloud dataproc clusters create $CLUSTER \
    --image-version 1.5-ubuntu18 \
    --master-machine-type n1-standard-1 \
    --master-boot-disk-type pd-ssd --master-boot-disk-size 100 \
    --num-secondary-workers = 7 \
    --secondary-worker-type=non-preemptible \
    --secondary-worker-boot-disk-type pd-standard \
    --secondary-worker-boot-disk-size=200 \
    --max-idle 3600s \
    --initialization-actions=gs://goog-dataproc-initialization-actions-$REGION/python/pip-install.sh \
    --metadata=PIP_PACKAGES=tensorflow==2.4.0

I don't understand what I'm doing wrong here. Can anyone offer advice?

The documentation for gcloud dataproc clusters create should help. It describes secondary workers.
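Putting the error message together, a sketch of a corrected command (using the values from the question; untested against a live project, and assuming `$PROJECT` and `$REGION` are already set):

```shell
# Create the cluster.
# Fix 1: drop --single-node — it is mutually exclusive with --num-workers,
#        which is exactly what the ERROR message says.
# Fix 2 (second attempt): if using secondary workers instead, the flag must be
#        written --num-secondary-workers=7, with no spaces around the '='.
CLUSTER="${PROJECT}-cluster"
gcloud dataproc clusters create "$CLUSTER" \
    --image-version 1.5-ubuntu18 \
    --master-machine-type n1-standard-1 \
    --master-boot-disk-type pd-ssd --master-boot-disk-size 100 \
    --num-workers 7 \
    --worker-machine-type n1-standard-1 \
    --worker-boot-disk-type pd-standard --worker-boot-disk-size 200 \
    --max-idle 3600s \
    --initialization-actions=gs://goog-dataproc-initialization-actions-${REGION}/python/pip-install.sh \
    --metadata=PIP_PACKAGES=tensorflow==2.4.0
```

This gives the 1 master + 7 (primary) worker layout the question asks for. If you specifically want secondary workers, replace `--num-workers 7` with `--num-secondary-workers=7 --secondary-worker-type=non-preemptible` and use the `--secondary-worker-boot-disk-*` flags, again with no spaces around any `=`.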