Few-shot learning is the ability to train or adapt a model from very limited labeled data, typically 2-20 examples. There are two main approaches: (1) LLM prompting, where you embed the examples in the prompt context and let the model infer from them without any weight updates; (2) meta-learning, where you train a model across many tasks so that it learns to adapt quickly from a handful of examples on a new task.

Few-shot prompting with LLMs is often called in-context learning (ICL): you provide the model with a few demonstrations (input-output pairs) and it generalizes to new inputs, all within a single inference call and with no fine-tuning required.
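The prompting approach can be sketched as assembling demonstrations and a new query into one prompt string. This is a minimal illustration only; the task, instruction, and demonstrations below are hypothetical, and real systems typically use chat-style message formats rather than raw strings:

```python
# Minimal sketch of building a few-shot (in-context learning) prompt.
# The sentiment task and all demonstrations here are hypothetical examples.

def build_few_shot_prompt(demos, query,
                          instruction="Classify the sentiment as positive or negative."):
    """Assemble demonstrations (input-output pairs) and a new query
    into a single prompt string for one inference call."""
    lines = [instruction, ""]
    for text, label in demos:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    # The query is formatted like the demos, but its output is left
    # for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

demos = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("Crisp writing and a stellar cast.", "positive"),
]

prompt = build_few_shot_prompt(demos, "The plot dragged and the ending fell flat.")
print(prompt)
```

The model sees the pattern in the demonstrations and (if it has learned the task distribution) completes the final `Output:` line; no gradients are computed and no weights change.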