Exploding memory consumption when training FL model with varying number of participants per round
I am running an FL algorithm, following the image classification tutorial. The number of participants differs from round to round, according to a predefined list of per-round participant counts:
number_of_participants_each_round = [
    108, 113, 93, 92, 114, 101, 94, 93, 107, 99, 118, 101, 114, 111, 88,
    101, 86, 96, 110, 80, 118, 84, 91, 120, 110, 109, 113, 96, 112, 107,
    119, 91, 97, 99, 97, 104, 103, 120, 89, 100, 104, 104, 103, 88, 108]
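(For context, a list of counts in this range could be generated with NumPy; this is a hypothetical sketch, the actual list above is fixed in advance.)

import numpy as np

# 45 rounds with cohort sizes drawn uniformly from [80, 120].
number_of_participants_each_round = np.random.randint(80, 121, size=45).tolist()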
The federated data is preprocessed and batched before training starts.
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

NUM_EPOCHS = 5
BATCH_SIZE = 20
SHUFFLE_BUFFER = 418
PREFETCH_BUFFER = 10

def preprocess(dataset):

  def batch_format_fn(element):
    # Flatten the 28x28 pixel grid into a 784-vector; reshape the label to rank 2.
    return collections.OrderedDict(
        x=tf.reshape(element['pixels'], [-1, 784]),
        y=tf.reshape(element['label'], [-1, 1]))

  return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
      BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)

def make_federated_data(client_data, client_ids):
  return [preprocess(client_data.create_tf_dataset_for_client(x))
          for x in client_ids]
federated_train_data = make_federated_data(data_train, data_train.client_ids)
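As a quick sanity check (my own addition, in the spirit of the tutorial), one preprocessed batch can be inspected; with EMNIST each full batch should come out as x of shape (20, 784) and y of shape (20, 1):

sample_batch = next(iter(federated_train_data[0]))
print(sample_batch['x'].shape, sample_batch['y'].shape)  # expect (20, 784) and (20, 1)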
Each round, participants are sampled at random from federated_train_data[0:expected_total_clients] according to number_of_participants_each_round, and then iterative_process runs for 45 rounds.
expected_total_clients = 500
round_nums = 45

for round_num in range(0, round_nums):
  # Sample a different number of clients each round.
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))
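To see the growth numerically, the loop can be instrumented to log resident memory and the Python thread count each round (a sketch; psutil is an extra dependency I added, not part of the tutorial):

import threading
import psutil  # assumption: psutil is installed

process = psutil.Process()

for round_num in range(0, round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  # Log resident memory (GB) and live Python threads to track per-round growth.
  print('round {:2d}, rss={:.1f} GB, threads={}'.format(
      round_num + 1,
      process.memory_info().rss / 1e9,
      threading.active_count()))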
The problem is that VRAM usage explodes after a few rounds: it reaches 5.5 GB around round 6~7 and then grows at roughly 0.8 GB/round until training finally crashes around round 25~26, at which point VRAM has reached 17 GB and more than 4000 Python threads have been created.
The error message:
F tensorflow/core/platform/default/env.cc:72] Check failed: ret == 0 (35 vs. 0)Thread creation via pthread_create() failed.
### Troubleshooting ###
Reducing number_of_participants_each_round to 20 allows training to complete, but memory consumption is still substantial and still growing.
Running the same code with a fixed number of participants per round, memory consumption is fine: roughly 1.5~2.0 GB of VRAM in total over the whole training run.
expected_total_clients = 500
fixed_client_size_per_round = 100
round_nums = 45

for round_num in range(0, round_nums):
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=fixed_client_size_per_round,
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))
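If the leak is tied to the runtime's cached executors, one heavy-handed workaround is to rebuild the local execution context each round so stale executor stacks are dropped. A sketch, assuming TFF 0.18's tff.backends.native.set_local_execution_context() (check the API for your version):

import tensorflow_federated as tff

for round_num in range(0, round_nums):
  # Rebuild the context so cached executor stacks (and their threads) are
  # released; this trades per-round setup time for flat memory usage.
  tff.backends.native.set_local_execution_context()
  sampled_clients = np.random.choice(
      a=federated_train_data[0:expected_total_clients],
      size=number_of_participants_each_round[round_num],
      replace=False)
  state, metrics = iterative_process.next(state, list(sampled_clients))
  print('round {:2d}, metrics={}'.format(round_num + 1, metrics))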
Additional details:
OS: macOS Mojave 10.14.6
python -V: Python 3.8.5 then downgraded to Python 3.7.9
TF version: 2.4.1
TFF version: 0.18.0
Keras version: 2.4.3
Is this expected memory behavior or a bug? Are there any refactorings or hints that could reduce memory consumption?
The problem turned out to be a bug in the executor stack of the TFF runtime process. The full details and the bug fix follow below.
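My understanding of the failure mode (a conceptual sketch, not actual TFF source): the native runtime caches one executor stack per distinct client cardinality, so with roughly 40 distinct cohort sizes over 45 rounds, dozens of stacks accumulate, each holding on to its own worker threads, and none are released. With a fixed cohort size there is exactly one cache entry, which matches the stable memory observed above. A toy model of that behavior (names are hypothetical, purely illustrative):

import threading

# One "executor stack" cached per cohort size; never evicted.
_stack_cache = {}

class FakeExecutorStack:
  def __init__(self, num_clients):
    # Stand-in for the worker threads a real stack would hold on to.
    self.workers = [threading.Thread(target=lambda: None)
                    for _ in range(num_clients)]

def get_stack(num_clients):
  if num_clients not in _stack_cache:  # every new cohort size is a cache miss
    _stack_cache[num_clients] = FakeExecutorStack(num_clients)
  return _stack_cache[num_clients]

for n in [108, 113, 93, 108, 92]:
  get_stack(n)

print(len(_stack_cache))  # 4 distinct sizes -> 4 stacks kept alive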