Micrograd is a minimalist scalar-valued automatic differentiation engine accompanied by a small neural network library, created by Andrej Karpathy as an educational project. The core of the library is the Value class in engine.py, which wraps scalar numbers, overloads Python's arithmetic operators, and builds a dynamic computation graph during the forward pass. Reverse-mode autodiff (backpropagation) is then performed by calling .backward(), which topologically sorts the graph and propagates gradients using closures stored at each node.

Built on top of this engine, nn.py provides three composable abstractions (Neuron, Layer, and MLP) that mirror the structure of real deep learning frameworks such as PyTorch in only a few dozen lines of code. This deliberate minimalism is the key design decision: every concept, including gradient accumulation, topological sorting for backprop, and parameter management, is visible and readable without abstraction overhead.

The primary audience is learners who want to understand deeply how neural networks and backpropagation work under the hood. It is also useful as a reference implementation or a starting point for building more complex autograd systems. The test suite validates gradient correctness by comparing results against PyTorch, making it easy to verify that the tiny engine behaves correctly.
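The Value mechanics described above can be sketched as follows. This is a condensed illustration in the spirit of engine.py, not the library's exact code: each operator records a closure that propagates gradients to its inputs, and backward() runs those closures in reverse topological order. Note how gradients accumulate with += so that a node reused in the graph receives contributions from every path.

```python
class Value:
    """Scalar wrapper that records operations to build a computation graph."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # closure that pushes grad to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological order guarantees a node's grad is complete before
        # its own closure runs
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# a appears twice in a*b + a, so its gradient accumulates:
# d/da = b + 1 = 4, d/db = a = 2
a, b = Value(2.0), Value(3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # 4.0 2.0
```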
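The three abstractions in nn.py compose in the obvious way: a Neuron holds weights and a bias, a Layer holds neurons, and an MLP holds layers. The sketch below mirrors that structure but, for brevity, runs the forward pass on plain floats rather than Value objects (in the real library the parameters are Value instances so that backprop reaches them); the names follow nn.py, but the details are illustrative.

```python
import math
import random

class Neuron:
    """One unit: weighted sum of inputs plus bias, through a nonlinearity."""
    def __init__(self, nin):
        self.w = [random.uniform(-1, 1) for _ in range(nin)]
        self.b = 0.0
    def __call__(self, x):
        act = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return math.tanh(act)
    def parameters(self):
        return self.w + [self.b]

class Layer:
    """A list of neurons applied to the same input vector."""
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]
    def __call__(self, x):
        return [n(x) for n in self.neurons]
    def parameters(self):
        return [p for n in self.neurons for p in n.parameters()]

class MLP:
    """Layers chained so each one's output feeds the next."""
    def __init__(self, nin, nouts):
        sizes = [nin] + nouts
        self.layers = [Layer(sizes[i], sizes[i + 1]) for i in range(len(nouts))]
    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
    def parameters(self):
        return [p for layer in self.layers for p in layer.parameters()]

# a 3-input network with two hidden layers of 4 and one output
model = MLP(3, [4, 4, 1])
out = model([1.0, -2.0, 3.0])
print(len(model.parameters()))  # 41 = 4*(3+1) + 4*(4+1) + 1*(4+1)
```

parameters() flattening is what makes training loops one-liners: every weight and bias in the whole network is reachable from a single list.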
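The repository's own tests verify gradients against PyTorch. The same verification idea can be shown without PyTorch using a central finite-difference check; this hypothetical sketch validates hand-derived gradients of a sample function rather than micrograd itself, but the comparison pattern is the same.

```python
def f(a, b):
    # sample scalar function: f(a, b) = a*b + a
    return a * b + a

def analytic_grad(a, b):
    # hand-derived partials: df/da = b + 1, df/db = a
    return b + 1.0, a

def numeric_grad(f, a, b, h=1e-6):
    # central differences approximate each partial derivative
    da = (f(a + h, b) - f(a - h, b)) / (2 * h)
    db = (f(a, b + h) - f(a, b - h)) / (2 * h)
    return da, db

ga, gb = analytic_grad(2.0, 3.0)
na, nb = numeric_grad(f, 2.0, 3.0)
print(abs(ga - na) < 1e-4 and abs(gb - nb) < 1e-4)  # True
```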