fedsim.fl.algorithms package#

Submodules#

fedsim.fl.algorithms.adabest module#

This file contains an implementation of the following paper: Title: “Minimizing Client Drift in Federated Learning via Adaptive Bias Estimation” Authors: Farshid Varno, Marzie Saghayi, Laya Rafiee, Sharut Gupta, Stan Matwin, Mohammad Havaei Publication date: [Submitted on 27 Apr 2022 (v1), last revised 23 May 2022 (this version, v2)] Link: https://arxiv.org/abs/2204.13170

class fedsim.fl.algorithms.adabest.AdaBest(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, mu=0.02, beta=0.98, *args, **kwargs)[source]#

Bases: fedsim.fl.algorithms.fedavg.FedAvg

deploy()[source]#

Return a Mapping of name -> parameters_set used to test the model

Raises

NotImplementedError – abstract class to be implemented by child

optimize(aggregator)[source]#

Optimize the server model(s) and return metrics to be reported

Parameters

aggregator (Any) – Aggregator instance

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[Hashable, Any] – context to be reported

receive_from_client(client_id, client_msg, aggregation_results)[source]#

receive and aggregate info from selected clients

Parameters
  • client_id (int) – id of the sender (client)

  • client_msg (Mapping[Hashable, Any]) – client context that is sent

  • aggregation_results (Any) – aggregator instance to collect info

Raises

NotImplementedError – abstract class to be implemented by child

send_to_server(client_id, datasets, epochs, loss_fn, batch_size, lr, weight_decay=0, device='cuda', ctx=None, *args, **kwargs)[source]#

Client operation on the received information.

Parameters
  • client_id (int) – id of the client

  • datasets (Dict[str, Iterable]) – this comes from Data Manager

  • epochs (int) – number of epochs to train

  • loss_fn (nn.Module) – either ‘ce’ (for cross-entropy) or ‘mse’

  • batch_size (int) – training batch_size

  • lr (float) – client learning rate

  • weight_decay (float, optional) – weight decay for SGD. Defaults to 0.

  • device (Union[int, str], optional) – Defaults to ‘cuda’.

  • ctx (Optional[Dict[Hashable, Any]], optional) – context received from the server. Defaults to None.

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[str, Any] – client context to be sent to the server
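The `mu` and `beta` constructor arguments govern AdaBest's adaptive bias estimate. As a loose, framework-free sketch of the idea (our own simplification with hypothetical names, not fedsim code and not the paper's exact update rules), the server can be thought of as keeping an exponentially decayed estimate of client drift, where `beta` decays the previous estimate and `mu` scales the newly observed drift:

```python
def update_bias_estimate(h_prev, prev_avg, curr_avg, mu=0.02, beta=0.98):
    """One step of an exponential drift estimate (simplified AdaBest sketch).

    h_prev:   previous per-parameter bias estimate
    prev_avg: previous round's averaged parameters
    curr_avg: current round's averaged parameters
    """
    return [beta * h + mu * (p - c)
            for h, p, c in zip(h_prev, prev_avg, curr_avg)]

# starting from a zero estimate, one observed drift of 0.5 gives mu * 0.5
h = update_bias_estimate([0.0], [1.0], [0.5])
```

The defaults `mu=0.02, beta=0.98` mirror the constructor signature above; consult the paper for the exact server and client update equations.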

fedsim.fl.algorithms.fedavg module#

This file contains an implementation of the following paper: Title: “Communication-Efficient Learning of Deep Networks from Decentralized Data” Authors: H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas Publication date: February 17th, 2016 Link: https://arxiv.org/abs/1602.05629

class fedsim.fl.algorithms.fedavg.FedAvg(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, *args, **kwargs)[source]#

Bases: fedsim.fl.fl_algorithm.FLAlgorithm

agg(client_id, client_msg, aggregator, weight=1)[source]#
deploy()[source]#

Return a Mapping of name -> parameters_set used to test the model

Raises

NotImplementedError – abstract class to be implemented by child

optimize(aggregator)[source]#

Optimize the server model(s) and return metrics to be reported

Parameters

aggregator (Any) – Aggregator instance

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[Hashable, Any] – context to be reported

receive_from_client(client_id, client_msg, aggregation_results)[source]#

receive and aggregate info from selected clients

Parameters
  • client_id (int) – id of the sender (client)

  • client_msg (Mapping[Hashable, Any]) – client context that is sent

  • aggregation_results (Any) – aggregator instance to collect info

Raises

NotImplementedError – abstract class to be implemented by child

report(dataloaders, metric_logger, device, optimize_reports, deployment_points=None)[source]#

test on global data and report info

Parameters
  • dataloaders (Any) – dict of data loaders to test the global model(s)

  • metric_logger (Any) – the logging object (e.g., SummaryWriter)

  • device (str) – ‘cuda’, ‘cpu’ or gpu number

  • optimize_reports (Mapping[Hashable, Any]) – dict returned by the optimizer

  • deployment_points (Mapping[Hashable, torch.Tensor], optional) – output of deploy method

Raises

NotImplementedError – abstract class to be implemented by child

send_to_client(client_id)[source]#
Returns the context to send to the client corresponding to client_id.

Do not send shared objects such as the server model directly; deepcopy them before sending.

Parameters

client_id (int) – id of the receiving client

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[Hashable, Any] – the context to be sent in form of a Mapping
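The warning above about shared objects can be illustrated with plain `copy.deepcopy` (the function and key names below are hypothetical, not fedsim internals): sending a deep copy ensures that client-side mutation cannot corrupt the server's state.

```python
import copy

def make_client_context(server_state):
    # Deep-copy shared state so the client receives an independent object.
    return {"model_state": copy.deepcopy(server_state)}

server_state = {"w": [1.0, 2.0]}
ctx = make_client_context(server_state)
ctx["model_state"]["w"][0] = 99.0   # client mutates its own copy
print(server_state["w"][0])          # server state unchanged: 1.0
```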

send_to_server(client_id, datasets, epochs, loss_fn, batch_size, lr, weight_decay=0, device='cuda', ctx=None, step_closure=None, *args, **kwargs)[source]#

Client operation on the received information.

Parameters
  • client_id (int) – id of the client

  • datasets (Dict[str, Iterable]) – this comes from Data Manager

  • epochs (int) – number of epochs to train

  • loss_fn (nn.Module) – either ‘ce’ (for cross-entropy) or ‘mse’

  • batch_size (int) – training batch_size

  • lr (float) – client learning rate

  • weight_decay (float, optional) – weight decay for SGD. Defaults to 0.

  • device (Union[int, str], optional) – Defaults to ‘cuda’.

  • ctx (Optional[Dict[Hashable, Any]], optional) – context received from the server. Defaults to None.

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[str, Any] – client context to be sent to the server
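The aggregation performed across receive_from_client and optimize amounts, in FedAvg, to a weighted average of client parameters. A minimal, framework-free sketch of that average (the function name and list-based representation are ours, not fedsim API):

```python
def fedavg_aggregate(client_params, client_weights):
    """Weighted average of client parameter vectors (FedAvg).

    client_params:  list of per-client parameter lists
    client_weights: relative weights, e.g. local dataset sizes
    """
    total = sum(client_weights)
    n_params = len(client_params[0])
    avg = [0.0] * n_params
    for params, w in zip(client_params, client_weights):
        for i, p in enumerate(params):
            avg[i] += (w / total) * p
    return avg

# two equally weighted clients: the plain mean of their parameters
print(fedavg_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 1]))  # [2.0, 3.0]
```

In fedsim, the bookkeeping for these weighted sums is handled through the aggregator instance passed around in the methods above.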

fedsim.fl.algorithms.fedavgm module#

This file contains an implementation of the following paper: Title: “Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification” Authors: Tzu-Ming Harry Hsu, Hang Qi, Matthew Brown Publication date: September 13th, 2019 Link: https://arxiv.org/abs/1909.06335

class fedsim.fl.algorithms.fedavgm.FedAvgM(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, momentum=0.9, *args, **kwargs)[source]#

Bases: fedsim.fl.algorithms.fedavg.FedAvg
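The extra `momentum` argument (default 0.9) is applied on the server side: instead of applying the averaged client update directly, FedAvgM accumulates it into a velocity term. A simplified sketch of one such server step (hypothetical names, not fedsim code):

```python
def fedavgm_step(params, velocity, avg_update, momentum=0.9, slr=1.0):
    """One server step with momentum over the averaged client update.

    params:     current server parameters
    velocity:   running momentum buffer
    avg_update: weighted-average client update (pseudo-gradient)
    slr:        server learning rate
    """
    new_v = [momentum * v + u for v, u in zip(velocity, avg_update)]
    new_p = [p - slr * v for p, v in zip(params, new_v)]
    return new_p, new_v

# from a zero velocity, one update of 0.2 moves the parameter by slr * 0.2
p, v = fedavgm_step([1.0], [0.0], [0.2])
```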

fedsim.fl.algorithms.feddyn module#

This file contains an implementation of the following paper: Title: “Federated Learning Based on Dynamic Regularization” Authors: Durmus Alp Emre Acar, Yue Zhao, Ramon Matas, Matthew Mattina, Paul Whatmough, Venkatesh Saligrama Publication date: [28 Sept 2020 (modified: 25 Mar 2021)] Link: https://openreview.net/forum?id=B7v4QMR6Z9w

class fedsim.fl.algorithms.feddyn.FedDyn(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, mu=0.02, *args, **kwargs)[source]#

Bases: fedsim.fl.algorithms.fedavg.FedAvg

deploy()[source]#

Return a Mapping of name -> parameters_set used to test the model

Raises

NotImplementedError – abstract class to be implemented by child

optimize(aggregator)[source]#

Optimize the server model(s) and return metrics to be reported

Parameters

aggregator (Any) – Aggregator instance

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[Hashable, Any] – context to be reported

receive_from_client(client_id, client_msg, aggregation_results)[source]#

receive and aggregate info from selected clients

Parameters
  • client_id (int) – id of the sender (client)

  • client_msg (Mapping[Hashable, Any]) – client context that is sent

  • aggregation_results (Any) – aggregator instance to collect info

Raises

NotImplementedError – abstract class to be implemented by child

send_to_server(client_id, datasets, epochs, loss_fn, batch_size, lr, weight_decay=0, device='cuda', ctx=None, *args, **kwargs)[source]#

Client operation on the received information.

Parameters
  • client_id (int) – id of the client

  • datasets (Dict[str, Iterable]) – this comes from Data Manager

  • epochs (int) – number of epochs to train

  • loss_fn (nn.Module) – either ‘ce’ (for cross-entropy) or ‘mse’

  • batch_size (int) – training batch_size

  • lr (float) – client learning rate

  • weight_decay (float, optional) – weight decay for SGD. Defaults to 0.

  • device (Union[int, str], optional) – Defaults to ‘cuda’.

  • ctx (Optional[Dict[Hashable, Any]], optional) – context received from the server. Defaults to None.

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[str, Any] – client context to be sent to the server

fedsim.fl.algorithms.fednova module#

This file contains an implementation of the following paper: Title: “Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization” Authors: Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor Publication date: 15 Jul 2020 Link: https://arxiv.org/abs/2007.07481

class fedsim.fl.algorithms.fednova.FedNova(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, *args, **kwargs)[source]#

Bases: fedsim.fl.algorithms.fedavg.FedAvg

receive_from_client(client_id, client_msg, aggregation_results)[source]#

receive and aggregate info from selected clients

Parameters
  • client_id (int) – id of the sender (client)

  • client_msg (Mapping[Hashable, Any]) – client context that is sent

  • aggregation_results (Any) – aggregator instance to collect info

Raises

NotImplementedError – abstract class to be implemented by child
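FedNova overrides only the receiving side because its change is in how client updates are combined: each client's accumulated update is normalized by its number of local steps before weighting, which removes the objective inconsistency caused by heterogeneous local work. A framework-free sketch of that normalized averaging (hypothetical names, not fedsim code):

```python
def fednova_aggregate(client_deltas, local_steps, data_weights):
    """Normalized averaging of client updates (FedNova sketch).

    client_deltas: per-client accumulated parameter updates
    local_steps:   number of local optimization steps per client (tau_i)
    data_weights:  relative data weights per client
    """
    total = sum(data_weights)
    # effective number of steps is the weighted mean of the tau_i
    tau_eff = sum((w / total) * t for w, t in zip(data_weights, local_steps))
    n = len(client_deltas[0])
    agg = [0.0] * n
    for delta, tau, w in zip(client_deltas, local_steps, data_weights):
        for i, d in enumerate(delta):
            agg[i] += (w / total) * (d / tau)   # normalize by local steps
    return [tau_eff * a for a in agg]
```

With equal per-step progress, clients that ran more local steps no longer dominate the aggregate, unlike a plain FedAvg of the raw deltas.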

fedsim.fl.algorithms.fedprox module#

This file contains an implementation of the following paper: Title: “Federated Optimization in Heterogeneous Networks” Authors: Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith Publication date: [Submitted on 14 Dec 2018 (v1), last revised 21 Apr 2020 (this version, v5)] Link: https://arxiv.org/abs/1812.06127

class fedsim.fl.algorithms.fedprox.FedProx(data_manager, metric_logger, num_clients, sample_scheme, sample_rate, model_class, epochs, loss_fn, batch_size=32, test_batch_size=64, local_weight_decay=0.0, slr=1.0, clr=0.1, clr_decay=1.0, clr_decay_type='step', min_clr=1e-12, clr_step_size=1000, device='cuda', log_freq=10, mu=0.0001, *args, **kwargs)[source]#

Bases: fedsim.fl.algorithms.fedavg.FedAvg

send_to_server(client_id, datasets, epochs, loss_fn, batch_size, lr, weight_decay=0, device='cuda', ctx=None, *args, **kwargs)[source]#

Client operation on the received information.

Parameters
  • client_id (int) – id of the client

  • datasets (Dict[str, Iterable]) – this comes from Data Manager

  • epochs (int) – number of epochs to train

  • loss_fn (nn.Module) – either ‘ce’ (for cross-entropy) or ‘mse’

  • batch_size (int) – training batch_size

  • lr (float) – client learning rate

  • weight_decay (float, optional) – weight decay for SGD. Defaults to 0.

  • device (Union[int, str], optional) – Defaults to ‘cuda’.

  • ctx (Optional[Dict[Hashable, Any]], optional) – context received from the server. Defaults to None.

Raises

NotImplementedError – abstract class to be implemented by child

Returns

Mapping[str, Any] – client context to be sent to the server
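FedProx changes only the client-side objective: the `mu` argument scales a proximal term that penalizes the distance between the local parameters and the global model received from the server. A minimal scalar sketch (hypothetical names, not fedsim code):

```python
def fedprox_loss(base_loss, params, global_params, mu=0.0001):
    """Local FedProx objective: task loss plus (mu / 2) * ||w - w_global||^2.

    params:        current local parameters
    global_params: parameters received from the server
    """
    prox = sum((p - g) ** 2 for p, g in zip(params, global_params))
    return base_loss + 0.5 * mu * prox
```

With `mu=0` this reduces to the plain local loss, recovering FedAvg's client objective.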

Module contents#