Andrej Karpathy has released Nanochat, a groundbreaking project that demystifies large language model (LLM) training by showing anyone how to build and train a ChatGPT-like chatbot for about $100. The project collapses the entire LLM pipeline into an accessible experiment, letting learners train their own 561M-parameter model in about four hours from a single, readable code repository.
What Makes Nanochat a Game-Changer?
Nanochat is a self-contained, 8,000-line “ChatGPT-clone” pipeline that reveals every step of the process, from the tokenizer to the web UI. Unlike complex frameworks such as Hugging Face that hide details behind layers of abstraction, Nanochat is designed to be read end-to-end. It uses a dependency-light stack: Python/PyTorch for the model, a 200-line Rust tokenizer, and simple HTML/JS for the UI, making the entire process transparent and easy to follow.
A Look Inside the Nanochat Training Recipe
Nanochat provides a complete, accessible framework for training a language model from scratch. It uses a multi-stage process, including data tokenization, pretraining on educational web text, and fine-tuning on conversational data, all managed through simple scripts rather than complex configuration files.
The project exposes the same multi-stage recipe used by frontier labs, but in a highly simplified format:
- Tokenizer: A Rust-based Byte Pair Encoding (BPE) tokenizer with a 65,536-token vocabulary (a toy sketch of the merging step follows this list).
- Pre-training: Training on 33 billion tokens from the FineWeb-Edu dataset.
- Mid-training: Continued training on SmolTalk conversations, supplemented with MMLU and GSM8K data.
- Supervised Fine-Tuning (SFT): Fine-tuning on tasks from ARC-E/C, GSM8K, and HumanEval.
- Optional Reinforcement Learning: A GRPO loop for refining performance on math problems.
- Inference Engine: A hand-rolled engine with KV-caching and a one-shot “report-card” script that prints performance scores.
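To make the tokenizer stage concrete, here is a toy BPE trainer in plain Python. It is a sketch for illustration only, not nanochat's Rust implementation: a real trainer runs over a large corpus, applies pre-tokenization rules, and builds a 65,536-entry vocabulary rather than a handful of merges.

```python
from collections import Counter

def train_bpe(text: str, num_merges: int):
    """Toy byte-pair-encoding trainer: repeatedly merge the most frequent
    adjacent pair of tokens into a new vocabulary entry."""
    tokens = list(text.encode("utf-8"))      # start from raw bytes (ids 0..255)
    merges = {}                              # (a, b) -> new token id
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges[(a, b)] = next_id
        # rewrite the token stream with the merged pair
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(next_id)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
        next_id += 1
    return tokens, merges

tokens, merges = train_bpe("low lower lowest", num_merges=5)
print(len(merges), "merges learned;", len(tokens), "tokens remain")
```

The same greedy merge idea, scaled up and implemented in Rust for speed, is what turns raw web text into the 65,536-token vocabulary used throughout the pipeline.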
Performance vs. Price: How Good Is a $100 LLM?
For its minimal cost and training time, Nanochat delivers surprisingly capable results. A 24-hour training run, which amounts to roughly the training compute of GPT-3 Small and on the order of 1/1,000th of the full GPT-3, achieves respectable benchmark scores that provide a legitimate baseline for hobby projects and classroom demos (a quick cost sanity check follows the list below).
- 4 hours ($100): A coherent, chatty 561M-parameter model.
- 12 hours ($300): Performance that surpasses GPT-2 on the CORE benchmark.
- 24 hours ($600): Reaches ~40% on MMLU, ~70% on ARC-Easy, and ~20% on GSM8K.
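A quick sanity check of those price points, assuming an on-demand rate of roughly $24/hour for an 8xH100 node (the rate is an assumption and varies by provider):

```python
# Back-of-the-envelope check of the price tiers above.
RATE_PER_HOUR = 24  # USD, assumed 8xH100 on-demand rate

for hours in (4, 12, 24):
    print(f"{hours:>2} h  ->  ~${hours * RATE_PER_HOUR}")
# ~$96, ~$288, ~$576: consistent with the ~$100 / $300 / $600 tiers.
```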
Run It Anywhere: From Cloud GPUs to Raspberry Pi
While training requires cloud GPUs (e.g., an 8xH100 node), the final 561M-parameter checkpoint is remarkably portable. The trained model is small enough to run inference at interactive speeds on a device as humble as a Raspberry Pi 5. The repository includes both a command-line interface and a simple web UI, so you can host your personal ChatGPT-style assistant locally.
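To show why KV-caching makes a small checkpoint cheap to serve, here is a minimal, hypothetical single-head attention decode loop in PyTorch. It is not nanochat's actual Engine, just a sketch of the idea that each new token attends over cached keys and values instead of re-processing the whole prefix.

```python
import torch

torch.manual_seed(0)
vocab, d = 256, 64                      # toy sizes, not nanochat's real config
emb = torch.nn.Embedding(vocab, d)
wq, wk, wv = (torch.nn.Linear(d, d, bias=False) for _ in range(3))
head = torch.nn.Linear(d, vocab, bias=False)

@torch.no_grad()
def decode(prompt_ids, max_new=8):
    k_cache, v_cache = [], []
    ids = list(prompt_ids)
    # prefill: cache keys/values for every prompt token
    for tok in ids:
        x = emb(torch.tensor([tok]))
        k_cache.append(wk(x)); v_cache.append(wv(x))
    # decode: each step attends over the cache, then extends it by one entry
    for _ in range(max_new):
        q = wq(x)                                    # query for the latest token only
        K, V = torch.cat(k_cache), torch.cat(v_cache)
        att = torch.softmax(q @ K.T / d ** 0.5, dim=-1)
        nxt = int(head(att @ V).argmax(dim=-1))
        ids.append(nxt)
        x = emb(torch.tensor([nxt]))
        k_cache.append(wk(x)); v_cache.append(wv(x))
    return ids

print(decode([1, 2, 3]))
```

Because the cache grows by one key/value pair per generated token, each step is cheap even on a CPU, which is what makes interactive inference on a Raspberry Pi plausible for a model this size.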
Community Adoption and Future Directions
The AI community has responded with tremendous enthusiasm. Since its launch, the Nanochat GitHub repository has amassed over 19,000 stars and 1,900 forks, sparking lively discussions on data curation, performance optimization, and new features.
Karpathy frames Nanochat as a “strong, hackable baseline” and plans to integrate it into the free LLM101n course as a practical capstone project. As a fully open, MIT-licensed stack, it invites researchers and hobbyists to swap optimizers, test new architectures, or bolt on retrieval systems, with many already sharing their results.
The project’s low entry cost is a major draw, but its true, lasting contribution may be its transparency. By letting learners trace every tensor from dataset to dialogue, Nanochat turns the large language model from a black box into a LEGO set.