AI basics
Very, very basic, with a tiny exercise at every step.
We’ll start with the simplest mental model (inputs → outputs), then build up: data, learning, tokens, vectors (embeddings), and finally how modern LLM apps work.
AI as a mapping
The simplest mental model: learn a mapping from inputs to outputs.
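For example: email in, spam/not-spam out; image in, label out. Here is a sketch of the idea in Python (the fruit rule below is hand-written and made up; a trained model plays exactly the same role, except its rule is learned from examples instead of typed in):

```python
# The simplest view: AI is a function from inputs to outputs.
# (This rule is hand-written and invented for illustration; a trained
# model is the same kind of mapping, with a learned rule inside.)
def classify_fruit(weight_grams: float, is_red: bool) -> str:
    if is_red and weight_grams < 200:
        return "apple"
    return "melon"

print(classify_fruit(150, True))   # apple
print(classify_fruit(900, False))  # melon
```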
Data teaches patterns
Traditional rules are written by people. In ML, the rules are learned from examples.
If you can collect examples, a model can learn the pattern. If you can’t define the pattern clearly, rules often break.
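A toy illustration of the difference, with made-up temperatures: the first cutoff is chosen by a person, the second is computed from labeled examples.

```python
# Hand-written rule: a person picks the cutoff.
def too_hot_rule(temp_c: float) -> bool:
    return temp_c > 30  # "30" was decided by a human

# Learned rule: the cutoff comes from labeled examples (data is made up).
examples = [(22, False), (25, False), (28, False), (33, True), (36, True)]
hot = [t for t, label in examples if label]
cool = [t for t, label in examples if not label]
# Put the boundary halfway between the two groups' averages.
learned_cutoff = (sum(hot) / len(hot) + sum(cool) / len(cool)) / 2

def too_hot_learned(temp_c: float) -> bool:
    return temp_c > learned_cutoff

print(learned_cutoff)       # 29.75
print(too_hot_learned(31))  # True
```

Collect more examples and the learned cutoff updates itself; the hand-written rule has to be re-edited by hand.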
How learning works (tiny loop)
Predict → compare → adjust → repeat.
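A minimal sketch of that loop, fitting a single weight w so that y ≈ w · x. The data and learning rate are made up; real training runs this same loop over millions of parameters.

```python
# The tiny loop: predict -> compare -> adjust -> repeat.
# Fit y = w * x to data generated with true w = 3 (values illustrative).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = 0.0    # start with a guess
lr = 0.01  # learning rate: how big each adjustment is

for step in range(200):
    for x, y_true in data:
        y_pred = w * x           # 1. predict
        error = y_pred - y_true  # 2. compare
        w -= lr * error * x      # 3. adjust (gradient of squared error)
                                 # 4. repeat

print(round(w, 3))  # close to 3.0
```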
Tokens
LLMs don’t read “words” — they read tokens.
A tokenizer turns text into a sequence of IDs. Different models tokenize differently, but the idea is the same: text → pieces.
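A toy tokenizer to show the text → IDs step. The whitespace splitting here is an assumption for brevity; real tokenizers (e.g. BPE) split text into subword pieces instead.

```python
# Toy tokenizer: text -> pieces -> integer IDs.
vocab = {}  # piece -> ID, built as we go

def tokenize(text: str) -> list[int]:
    ids = []
    for piece in text.lower().split():  # real tokenizers use subword
        if piece not in vocab:          # pieces, not whitespace words
            vocab[piece] = len(vocab)   # assign the next free ID
        ids.append(vocab[piece])
    return ids

print(tokenize("the cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```

Note how the repeated "the" maps to the same ID both times: the model sees numbers, not spelling.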
Vectors (embeddings)
Meaning is represented as numbers.
An embedding is a vector (a list of numbers). Texts with similar meanings end up close together in vector space.
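A sketch with tiny made-up vectors; real embeddings come from a model and have hundreds or thousands of dimensions. Cosine similarity is one common way to measure "close together":

```python
import math

# Tiny invented "embeddings" standing in for model output.
embeddings = {
    "dog":   [0.9, 0.8, 0.1],
    "puppy": [0.85, 0.75, 0.2],
    "car":   [0.1, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(round(cosine(embeddings["dog"], embeddings["puppy"]), 2))  # ~1.0, similar
print(round(cosine(embeddings["dog"], embeddings["car"]), 2))    # ~0.3, dissimilar
```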
LLMs: next-token prediction
The core trick: predict the next token, many times.
Given a context, the model outputs a probability distribution over the next token. Sampling a token, appending it, and repeating generates text.
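A toy stand-in for the idea: bigram counts from a made-up corpus replace the neural network, but the generate-by-repeating loop is the same one an LLM runs.

```python
import random

corpus = "the cat sat on the mat the cat ran".split()

# Count which tokens were seen following each token.
following: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    following.setdefault(a, []).append(b)

def next_token(context: str) -> str:
    options = following.get(context)
    if not options:  # dead end: nothing ever followed this token
        return "<end>"
    # Picking uniformly from observed followers acts like sampling
    # from the model's probability distribution.
    return random.choice(options)

# Generate by repeating "predict the next token".
token = "the"
output = [token]
for _ in range(6):
    token = next_token(token)
    if token == "<end>":
        break
    output.append(token)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

A real LLM conditions on the whole context, not just the previous token, but the loop is identical.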
Modern AI apps (LLM + RAG + tools)
Models are powerful, but they still need fresh data (retrieval) and ways to take actions (tools).
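A minimal RAG sketch under loud assumptions: the documents are invented, retrieval is keyword overlap instead of embedding search, and call_llm is a hypothetical stub standing in for a real model API.

```python
# Minimal RAG: retrieve relevant text, then let the model answer with it.
documents = [
    "Our office is closed on public holidays.",
    "Support hours are 9am to 5pm, Monday to Friday.",
    "Refunds are processed within 14 days.",
]

def retrieve(question: str) -> str:
    # Real systems rank by embedding similarity; keyword overlap
    # keeps the idea visible in a few lines.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real app would call a model API here.
    return f"(model answers using the prompt: {prompt!r})"

question = "When are support hours?"
context = retrieve(question)                           # 1. fetch fresh data
prompt = f"Context: {context}\nQuestion: {question}"   # 2. ground the model
print(call_llm(prompt))                                # 3. answer with context
```

Tools follow the same shape: instead of pasting retrieved text into the prompt, the app lets the model request an action (search, calculator, API call) and feeds the result back in.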