FLOAT: Federated Learning Optimizations with Automated Tuning
dc.contributor.author | Khan, Ahmad | en |
dc.contributor.author | Khan, Azal Ahmad | en |
dc.contributor.author | Abdelmoniem, Ahmed M. | en |
dc.contributor.author | Fountain, Samuel | en |
dc.contributor.author | Butt, Ali R. | en |
dc.contributor.author | Anwar, Ali | en |
dc.date.accessioned | 2024-05-02T12:34:48Z | en |
dc.date.available | 2024-05-02T12:34:48Z | en |
dc.date.issued | 2024-04-22 | en |
dc.date.updated | 2024-05-01T07:49:17Z | en |
dc.description.abstract | Federated Learning (FL) has emerged as a powerful approach that enables collaborative distributed model training without the need for data sharing. However, FL grapples with inherent heterogeneity challenges, leading to issues such as stragglers, dropouts, and performance variations. Selecting the clients that run an FL instance is crucial, but existing strategies introduce biases and participation issues and do not consider resource efficiency. Communication and training acceleration solutions proposed to increase client participation also fall short because of the dynamic nature of system resources. In this paper, we address these challenges with FLOAT, a novel framework that boosts FL client resource awareness. FLOAT dynamically optimizes resource utilization to meet training deadlines and mitigates stragglers and dropouts through various optimization techniques, leading to enhanced model convergence and improved performance. FLOAT leverages multi-objective Reinforcement Learning with Human Feedback (RLHF) to automate the selection of optimization techniques and their configurations, tailoring them to individual client resource conditions. Moreover, FLOAT integrates seamlessly into existing FL systems, remaining non-intrusive and versatile for both asynchronous and synchronous FL settings. In our evaluations, FLOAT increases accuracy by up to 53%, reduces client dropouts by up to 78×, and improves communication, computation, and memory utilization by up to 81×, 44×, and 20×, respectively. | en |
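The abstract's central mechanism, an RL agent that selects a per-client optimization technique from observed resource conditions under a multi-objective reward, can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not FLOAT's actual implementation: the action set, state discretization, reward weights, and all names (ACTIONS, ClientState, MultiObjectiveSelector) are hypothetical, and a simple scalarized epsilon-greedy bandit stands in for the paper's multi-objective RLHF.

```python
import random
from dataclasses import dataclass

# Hypothetical per-client optimization techniques a resource-aware FL
# framework might apply; illustrative names, not FLOAT's actual set.
ACTIONS = ["none", "gradient_quantization", "model_pruning", "partial_training"]

@dataclass
class ClientState:
    bandwidth_mbps: float   # available network bandwidth
    compute_units: float    # available compute (relative units)
    free_memory_mb: float   # available memory

def bucketize(state: ClientState) -> tuple:
    """Discretize continuous resource readings into a coarse state key."""
    return (
        int(state.bandwidth_mbps // 10),
        int(state.compute_units // 1.0),
        int(state.free_memory_mb // 512),
    )

class MultiObjectiveSelector:
    """Tabular epsilon-greedy selector over per-client optimizations.

    The reward scalarizes accuracy gain, round completion (dropout
    avoidance), and resource savings with fixed weights; a human-feedback
    signal could be folded into those weights (an assumption here, not
    FLOAT's documented design).
    """
    def __init__(self, epsilon=0.1, lr=0.1,
                 weights=(1.0, 1.0, 0.5)):  # (accuracy, completion, savings)
        self.q = {}  # (state_key, action) -> estimated value
        self.epsilon = epsilon
        self.lr = lr
        self.weights = weights

    def select(self, state: ClientState) -> str:
        key = bucketize(state)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        return max(ACTIONS, key=lambda a: self.q.get((key, a), 0.0))

    def update(self, state: ClientState, action: str,
               accuracy_gain: float, completed: float, savings: float):
        """Blend the objectives into one scalar reward and update Q."""
        w = self.weights
        reward = w[0] * accuracy_gain + w[1] * completed + w[2] * savings
        key = (bucketize(state), action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.lr * (reward - old)

# Example round: pick a technique for a resource-constrained client,
# then learn from the observed outcome of that training round.
selector = MultiObjectiveSelector()
client = ClientState(bandwidth_mbps=5.0, compute_units=0.5, free_memory_mb=900)
action = selector.select(client)
selector.update(client, action, accuracy_gain=0.02, completed=1.0, savings=0.6)
print(action)
```

The scalarized reward is the simplest way to trade off accuracy, dropout avoidance, and resource savings; under this reading, human feedback would adjust the weights rather than the learned values.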
dc.description.version | Published version | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1145/3627703.3650081 | en |
dc.identifier.uri | https://hdl.handle.net/10919/118730 | en |
dc.language.iso | en | en |
dc.publisher | ACM | en |
dc.rights | Creative Commons Attribution 4.0 International | en |
dc.rights.holder | The author(s) | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en |
dc.title | FLOAT: Federated Learning Optimizations with Automated Tuning | en |
dc.type | Article - Refereed | en |
dc.type.dcmitype | Text | en |