Federated Learning — Adversarial Scenarios
Simulate malicious clients in federated learning and evaluate aggregation defenses (Krum, median, trimmed mean).
This lab simulates a federated learning (FL) environment with both honest and malicious clients. It evaluates the robustness of aggregation algorithms against poisoning and backdoor attacks, measuring metrics such as global accuracy, attack success rate, and convergence stability.
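To make the threat model concrete, here is a minimal sketch of the kind of model-poisoning client the simulation assumes: an honest client submits a plain SGD step, while a malicious one flips and amplifies its gradient before submitting. All function names and the `scale` parameter are illustrative, not part of the lab's actual code.

```python
import numpy as np

def honest_update(weights, grad, lr=0.1):
    """One local SGD step on an honest client."""
    return weights - lr * grad

def malicious_update(weights, grad, lr=0.1, scale=-10.0):
    """Model-poisoning client: flips and amplifies its local gradient
    so the submitted update pulls the global model away from the
    honest descent direction."""
    return weights - lr * scale * grad

# Same local gradient, very different submitted updates
w, g = np.zeros(4), np.ones(4)
print(honest_update(w, g))     # small step in the -g direction
print(malicious_update(w, g))  # amplified step in the +g direction
```

Averaging such updates naively lets a single attacker dominate the round, which is what the robust aggregation rules below are designed to prevent.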
Tech Stack
- TensorFlow Federated · PySyft · NumPy
- Custom aggregation rules: Krum, Median, Trimmed-Mean
- Visualization of global vs local model drift
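The three aggregation rules named above can be sketched in a few lines of NumPy, assuming each client update is a flat vector and the rows of `updates` are the per-client submissions (a simplified sketch, not the lab's implementation):

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median across client updates (rows = clients)."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim=1):
    """Per coordinate, drop the `trim` largest and smallest values,
    then average the remaining ones."""
    s = np.sort(updates, axis=0)
    return s[trim:len(updates) - trim].mean(axis=0)

def krum(updates, f=1):
    """Krum: return the single update closest (in summed squared L2
    distance) to its n - f - 2 nearest neighbors, where f is the
    assumed number of malicious clients."""
    n = len(updates)
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))       # distances to the others
        scores.append(d[: n - f - 2].sum())       # sum over closest neighbors
    return updates[int(np.argmin(scores))]
```

With four honest updates near the origin and one large outlier, all three rules return a vector close to the honest cluster, whereas a plain mean would be dragged toward the outlier.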
Project Highlights
- Simulation Environment: multi-client setup with a configurable attack ratio and learning rate.

- Aggregation Defense: Evaluate robust federated averaging strategies resistant to malicious gradients.
- Metrics: Compare global model accuracy and adversarial success rate across aggregation methods.
- Visualization: Training dynamics plotted over communication rounds for interpretability.
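As an illustration of the accuracy-vs-robustness comparison above, the following sketch contrasts plain federated averaging with a coordinate-wise median under a single sign-flipping client, measuring each aggregate's drift from the honest consensus direction (all values and names here are illustrative, not results from the lab):

```python
import numpy as np

rng = np.random.default_rng(0)
true_update = np.ones(8)                         # direction honest clients agree on
honest = true_update + 0.05 * rng.standard_normal((9, 8))
poisoned = -10.0 * true_update                   # one sign-flipped, amplified client
updates = np.vstack([honest, poisoned])

fedavg = updates.mean(axis=0)                    # plain averaging
robust = np.median(updates, axis=0)              # coordinate-wise median

# Drift of each aggregate from the honest consensus
print(np.linalg.norm(fedavg - true_update))      # large: the attacker dominates
print(np.linalg.norm(robust - true_update))      # small: the median resists
```

Plotting these drift values over communication rounds, per aggregation rule, is one simple way to realize the training-dynamics visualization described above.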
