The Quiet Revolution: How Federated Learning Is Reshaping Privacy-Preserving AI
Abstract
Federated learning emerged from a 2017 Google paper as an engineering solution to a narrow problem: how do you improve a mobile keyboard's next-word prediction without uploading users' typing data to a central server? Seven years later, it has driven a genuine paradigm shift in how the AI research community thinks about the relationship between data, privacy, and model training. This perspective piece argues that federated learning's deepest contribution is not technical but conceptual: it has changed what counts as acceptable in AI system design and has legitimised a set of questions about data sovereignty that were previously considered beyond the scope of machine learning research. We trace this shift through three domains where federated learning has had disproportionate impact: healthcare, mobile devices, and financial services. We also examine the significant technical limitations that remain unresolved (statistical heterogeneity, communication efficiency, and Byzantine robustness) and argue that addressing them will require the field to engage more seriously with systems research than it currently does. The piece concludes with a reflection on what 'privacy-preserving AI' actually means and why the term is frequently misused in ways that obscure more than they reveal.
