The GPT (Generative Pre-trained Transformer) family spans GPT-1 through GPT-4 and their variants. All are built on the transformer architecture, using attention-based language modeling. Pre-trained on massive text corpora, they learn to predict the next token in a sequence. Once trained, they can be applied to a wide range of tasks: text generation, translation, summarization, question answering, code generation, and reasoning. Understanding the GPT family is key to understanding modern AI: from GPT-1 (2018) to GPT-4 (2023), each iteration demonstrates the power of scale, achieved through more data, more compute, and larger models.
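To make "predict the next token" concrete, here is a minimal Python sketch of autoregressive generation. It substitutes a toy bigram counting model for the transformer (an illustrative assumption, not how GPT is implemented: real GPT models attend over the entire context window, not just the previous token), but the generation loop itself mirrors how these models produce text, with each predicted token fed back in as input.

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: a bigram table built from a
# tiny corpus. Real GPT models learn billions of parameters over
# massive corpora, but the objective is the same: predict the next token.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Greedy decoding: pick the most frequent continuation.
    return follows[token].most_common(1)[0][0]

# Autoregressive generation: each predicted token becomes the
# input for the next prediction step.
token = "the"
for _ in range(5):
    print(token, end=" ")
    token = predict_next(token)
print(token)
```

Every downstream capability listed above (generation, summarization, question answering, and so on) emerges from scaling up this same next-token objective with far larger models and datasets.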