Federated Learning (FL) is a distributed machine learning approach in which training data never leaves the user's device. Instead of uploading data to a central server, the server sends the current model to each device; the device trains it locally and returns only a model update, which the server aggregates with updates from other devices. This reduces exposure because the server never sees raw user data, though the updates themselves can still leak information about it. Differential Privacy (DP) adds a mathematical layer: calibrated noise is injected into the model updates so that an adversary with access to the model cannot confidently infer whether any individual's data was used in training. Together, FL and DP shift privacy from a matter of trust (relying on server security) to a quantifiable guarantee: what an attacker can learn about any one person from the model is provably bounded.
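
To make the mechanics concrete, here is a minimal sketch of one DP federated-averaging round in NumPy: each client computes a local update, the server clips every update to bound any single user's influence, averages them, and adds Gaussian noise calibrated to the clipping bound. All names here (`local_update`, `CLIP_NORM`, `NOISE_MULTIPLIER`) are illustrative assumptions, not the API of any particular FL framework.

```python
import numpy as np

CLIP_NORM = 1.0          # max L2 norm allowed for any client's update
NOISE_MULTIPLIER = 1.1   # noise stddev as a multiple of CLIP_NORM

def local_update(global_model: np.ndarray, client_data) -> np.ndarray:
    """Placeholder for on-device training: returns a model delta.

    In a real system this would run a few steps of SGD on the
    client's private data; here we fake it with random noise.
    """
    return np.random.randn(*global_model.shape) * 0.1

def dp_fedavg_round(global_model: np.ndarray, clients: list) -> np.ndarray:
    """One round of federated averaging with a Gaussian DP mechanism."""
    clipped = []
    for data in clients:
        delta = local_update(global_model, data)
        # Clip each update so no individual can dominate the average;
        # this bounds the sensitivity that DP noise is calibrated to.
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, CLIP_NORM / (norm + 1e-12)))
    # Average the clipped updates, then add noise scaled to the clip
    # bound; the server only ever touches updates, never raw data.
    mean_update = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, NOISE_MULTIPLIER * CLIP_NORM / len(clients),
        size=global_model.shape)
    return global_model + mean_update + noise

# Example: a 10-parameter model updated by 100 simulated clients.
model = np.zeros(10)
model = dp_fedavg_round(model, clients=[None] * 100)
```

The clipping step is what makes the guarantee possible: because every update's norm is capped at `CLIP_NORM`, swapping one client's data in or out can shift the average by only a bounded amount, and the added Gaussian noise masks a shift of that size.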