What does 'with strategy.scope():' or 'with tf.distribute.experimental.TPUStrategy(tpu).scope():' do to the creation of a NN?

In the code here: https://www.kaggle.com/ryanholbrook/detecting-the-higgs-boson-with-tpus

the model is built with the following code before being compiled:

# strategy, dense_block, UNITS, ACTIVATION and DROPOUT are defined
# earlier in the notebook
with strategy.scope():
    # Wide Network
    wide = keras.experimental.LinearModel()

    # Deep Network
    inputs = keras.Input(shape=[28])
    x = dense_block(UNITS, ACTIVATION, DROPOUT)(inputs)
    x = dense_block(UNITS, ACTIVATION, DROPOUT)(x)
    x = dense_block(UNITS, ACTIVATION, DROPOUT)(x)
    x = dense_block(UNITS, ACTIVATION, DROPOUT)(x)
    x = dense_block(UNITS, ACTIVATION, DROPOUT)(x)
    outputs = layers.Dense(1)(x)
    deep = keras.Model(inputs=inputs, outputs=outputs)
    
    # Wide and Deep Network
    wide_and_deep = keras.experimental.WideDeepModel(
        linear_model=wide,
        dnn_model=deep,
        activation='sigmoid',
    )

I don't understand what with strategy.scope() is doing here, or whether it affects the model in any way. What exactly does it do?

And how could I figure out what something like this does on my own in the future? What resources should I consult?

Distribution strategies were introduced as part of TF2 to help distribute training across multiple GPUs, multiple machines, or TPUs with minimal code changes. I'd recommend this guide to distributed training for starters.
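To see what the scope itself does, note that any tf.Variable (and hence any Keras layer weight) created inside strategy.scope() is created *through* the strategy rather than as a plain variable. A minimal sketch, pinned to a single CPU device purely so it runs anywhere (on real hardware you would list your GPUs instead):

```python
import tensorflow as tf

# MirroredStrategy replicates variables across the listed devices;
# with one CPU device there is just one replica, but the mechanics
# are the same as with multiple GPUs.
strategy = tf.distribute.MirroredStrategy(["/cpu:0"])

# Variables created inside the scope are distribution-aware: the
# strategy decides where they live and how they are kept in sync.
with strategy.scope():
    v = tf.Variable(1.0)

print(strategy.num_replicas_in_sync)  # one replica per device
```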

Creating the model under TPUStrategy specifically will place the model on the TPU in a replicated fashion (identical weights on each core) and will keep the replica weights in sync by adding the appropriate collective communications (all-reducing the gradients). For more information check out the API doc on TPUStrategy as well as this intro to TPUs in TF2 colab notebook.
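For context, the full setup around the quoted snippet typically looks like the sketch below, which is the standard TF2 TPU boilerplate (the try/except fallback to the default strategy is an assumption added here so the same code also runs without a TPU; the model is a simplified stand-in for the notebook's wide-and-deep network):

```python
import tensorflow as tf
from tensorflow import keras

# Detect a TPU if one is attached; otherwise fall back to the
# default single-device strategy so the code runs anywhere.
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError:
    strategy = tf.distribute.get_strategy()

# Everything that creates variables (layers, optimizer slots) goes
# inside the scope, so the weights are placed per the strategy.
with strategy.scope():
    model = keras.Sequential([
        keras.Input(shape=[28]),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
```

Calling model.fit afterwards then transparently runs each training step across all replicas.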