This repository contains MicroGPT, a self-contained, single-file implementation of a GPT-style transformer language model. One compact Python file covers the full pipeline: tokenization, model architecture, the training loop, and text generation at inference time.

The guiding design decisions are minimalism and educational clarity: beyond core tensor operations, the implementation deliberately avoids external deep learning framework abstractions, so every component stays visible and understandable in one place.

The project is aimed at developers, students, and researchers who want to understand how GPT-style models work at a fundamental level without navigating a large codebase, or who need a lightweight starting point for experimenting with transformer architectures.
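To give a flavor of what "visible in one place" means, here is a minimal sketch (not MicroGPT's actual code; all names are illustrative) of the mechanism at the heart of any GPT-style model: single-head causal self-attention, written in pure Python so every step is explicit.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matmul(a, b):
    # (n x k) @ (k x m) -> (n x m), with plain lists of lists.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention.

    x is a seq_len x d_model list of lists; w_q, w_k, w_v are
    d_model x d_model projection matrices.
    """
    d = len(x[0])
    q, k, v = matmul(x, w_q), matmul(x, w_k), matmul(x, w_v)
    out = []
    for t, q_t in enumerate(q):
        # Causal mask: position t attends only to positions 0..t,
        # so generation can proceed left to right.
        scores = [sum(q_t[j] * k[s][j] for j in range(d)) / math.sqrt(d)
                  for s in range(t + 1)]
        weights = softmax(scores)
        out.append([sum(w * v[s][j] for s, w in enumerate(weights))
                    for j in range(d)])
    return out
```

A full transformer block would add multiple heads, a feed-forward network, residual connections, and layer normalization around this core, but the attention-with-causal-mask pattern above is the piece the rest of the file is built around.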