Stress testing (adversarial testing) is the systematic process of evaluating machine learning models under extreme, unusual, or adversarial conditions to identify robustness gaps, failure modes, and out-of-distribution vulnerabilities. Instead of testing on clean data similar to the training data, stress testing deliberately uses perturbed inputs (noise, occlusions, adversarial attacks, rare examples) to find where the model breaks. Techniques range from simple (adding Gaussian noise to images) to sophisticated (generating targeted adversarial examples that fool the model with minimal perturbation). The goal is to understand real-world risks before deployment: a model that achieves 95% accuracy on clean test data might still fail catastrophically on snow-covered roads or intentionally manipulated inputs.
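The two ends of that range can be sketched on a toy model. The snippet below is a minimal illustration, not a production harness: the linear "model", the dataset, and the helper names (`accuracy_under_noise`, `fgsm_perturb`) are all hypothetical stand-ins. It measures how accuracy degrades under random Gaussian noise, and then under an FGSM-style attack that, for a linear model, perturbs each input directly along the sign of the weight gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a fixed linear classifier on 2-D points,
# standing in for any trained model under test.
w = np.array([1.0, -1.0])
X = rng.normal(size=(500, 2))
y = (X @ w > 0).astype(int)  # labels chosen so clean accuracy is 100%

def predict(X):
    return (X @ w > 0).astype(int)

def accuracy_under_noise(sigma):
    """Stress test 1: evaluate on inputs corrupted by Gaussian noise."""
    X_noisy = X + rng.normal(scale=sigma, size=X.shape)
    return (predict(X_noisy) == y).mean()

def fgsm_perturb(X, y, eps):
    """Stress test 2: FGSM-style attack. For this linear model the loss
    gradient w.r.t. the input is +/- w, so each point is pushed eps
    (in the L-infinity sense) along sign(w), against its true class."""
    direction = np.where(y == 1, -1.0, 1.0)[:, None] * np.sign(w)[None, :]
    return X + eps * direction

for sigma in [0.0, 0.5, 1.0, 2.0]:
    print(f"noise sigma={sigma:.1f}  accuracy={accuracy_under_noise(sigma):.2f}")

acc_adv = (predict(fgsm_perturb(X, y, 0.5)) == y).mean()
print(f"FGSM eps=0.5  accuracy={acc_adv:.2f}")
```

The point of the sweep is the shape of the curve, not any single number: random noise erodes accuracy gradually, while the targeted attack collapses it with a much smaller perturbation budget, which is exactly the gap stress testing is meant to surface before deployment.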