Is there a way for TFF clients to have internal states?
The code I have seen in TFF tutorials and research projects generally only tracks server state. I would like there to be internal client state (for example, an additional client-internal neural network that is fully decentralized and never updated in a federated manner) that influences the federated client computations.
However, in the client computations I have seen, they are functions only of the server state and the data. Is it possible to accomplish the above?
Yes, this is easy to express in TFF, and it will run fine in the default execution stack.
As you have noticed, the TFF repository generally contains examples of cross-device federated learning (Kairouz et al., 2019). Generally we talk about the state having tff.SERVER placement, and the function signature for one "round" of federated learning has the following structure (for details about TFF's type shorthand, see the Federated data section of the documentation):
(<State@SERVER, {Dataset}@CLIENTS> -> State@SERVER)
We can represent stateful clients by simply extending that signature:
(<State@SERVER, {State}@CLIENTS, {Dataset}@CLIENTS> -> <State@SERVER, {State}@CLIENTS>)
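The code below references a few element types by name. A minimal sketch of how they might be defined, assuming for illustration that every piece of state is a flat float vector (in practice these would mirror your model's actual weight structures; the concrete shapes here are made up):

import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical element types; real ones would mirror the model's weights.
model_type = tff.TensorType(tf.float32, [10])
client_state_type = tff.TensorType(tf.float32, [10])
client_data_type = tff.SequenceType(tff.TensorType(tf.float32, [10]))
# The server state carries the global model; the computation below reads
# its `model` attribute.
server_state_type = tff.StructType([('model', model_type)])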
Implementing a version of Federated Averaging (McMahan et al., 2016) that includes a client state object might look something like:
@tff.tf_computation(
    model_type,
    client_state_type,  # additional state parameter
    client_data_type)
def client_training_fn(model, state, dataset):
  model_update, new_state = ...  # do some local training here
  return model_update, new_state  # return a tuple including the updated state


@tff.federated_computation(
    tff.FederatedType(server_state_type, tff.SERVER),
    tff.FederatedType(client_state_type, tff.CLIENTS),  # new parameter for state
    tff.FederatedType(client_data_type, tff.CLIENTS))
def run_fed_avg(server_state, client_states, client_datasets):
  client_initial_models = tff.federated_broadcast(server_state.model)
  client_updates, new_client_states = tff.federated_map(
      client_training_fn,
      # Pass the client states as an argument.
      (client_initial_models, client_states, client_datasets))
  average_update = tff.federated_mean(client_updates)
  new_server_state = tff.federated_map(
      server_update_fn,  # a separate tff.tf_computation, not shown here
      (server_state, average_update))
  # Make sure to return the client states so they can be used in later rounds.
  return new_server_state, new_client_states
An invocation of run_fed_avg would require a Python list of tensors/structures for each client participating in the round, and the result of the invocation will be the new server state and a list of updated client states.
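For concreteness, a minimal sketch of such a driver loop, assuming hypothetical initial values (initial_server_state, initial_client_states, num_rounds, and client_datasets are placeholders you would construct to match the types above):

# Hypothetical driver loop; the initial values are placeholders for
# structures built to match the types declared earlier.
server_state = initial_server_state
client_states = initial_client_states  # one entry per participating client
for round_num in range(num_rounds):
  # Each round consumes and returns the per-client states, so they persist
  # across rounds on the Python side without ever being aggregated.
  server_state, client_states = run_fed_avg(
      server_state, client_states, client_datasets)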