Learn AI Fundamentals
Interactive lessons, flashcards, and quizzes on the core concepts behind every AI tool — RAG, embeddings, transformers, hallucination, and more.
What is RAG (Retrieval-Augmented Generation)?
RAG combines a language model with a retrieval system, letting the AI search a knowledge base before answering — reducing hallucinations and keeping responses up to date.
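The retrieve-then-generate flow can be sketched in a few lines of Python. The keyword retriever and prompt format below are toy assumptions for illustration, not any specific product's API; real systems retrieve with embeddings and send the prompt to an actual model.

```python
# Minimal RAG sketch: retrieve relevant context, then prepend it to
# the prompt the language model sees. The knowledge base and the
# keyword-overlap retriever are toy stand-ins.

KNOWLEDGE_BASE = [
    "The context window is the amount of text an LLM can see at once.",
    "Embeddings turn text into vectors that capture meaning.",
    "Tokens are the subword units that LLMs actually process.",
]

def retrieve(query, docs, k=1):
    """Rank docs by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Ground the model: retrieved text goes in front of the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what are tokens", KNOWLEDGE_BASE)
```

Because the answer is grounded in retrieved text rather than the model's parameters, updating the knowledge base updates the answers — no retraining needed.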
LLM Tokenization Explained
Tokens are the atomic units LLMs process — not words, but subword pieces. Understanding tokens helps you write better prompts and manage API costs.
Fine-tuning vs Prompting — Which Should You Use?
Prompting is fast and cheap; fine-tuning permanently adjusts a model's weights. Knowing when to choose each can save you thousands of dollars and weeks of work.
AI Agent Architecture — How Autonomous AI Works
AI agents combine LLMs with tools, memory, and planning loops to complete multi-step tasks autonomously. Understanding the architecture helps you evaluate and choose the right agentic tools.
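The plan-act-observe loop at the heart of an agent can be sketched without any model at all. The stub policy and calculator tool below are illustrative assumptions; a real agent puts an actual LLM call inside `decide_next_step`.

```python
# Stripped-down agent loop: decide -> act -> observe -> repeat.
# decide_next_step is a stub standing in for an LLM call.

def calculator(expression):
    # Toy tool for the sketch; never eval untrusted input in real code.
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def decide_next_step(task, history):
    """Stub policy: use the calculator once, then finish."""
    if not history:
        return ("calculator", task)       # act: pick a tool and its input
    return ("finish", history[-1][1])     # done: return the last observation

def run_agent(task, max_steps=5):
    history = []                          # the agent's working memory
    for _ in range(max_steps):
        action, arg = decide_next_step(task, history)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # execute the chosen tool
        history.append((action, observation))
    return "step limit reached"

answer = run_agent("2 + 3 * 4")
```

The `max_steps` cap and the memory list are the two safety rails every agent framework adds around this same loop.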
What are Embeddings? (Vector Representations)
Embeddings convert text, images, or audio into lists of numbers that capture semantic meaning. They power semantic search, recommendations, and RAG systems.
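"Lists of numbers that capture meaning" is concrete enough to demo. The 3-D vectors below are made up for illustration (real models produce hundreds or thousands of dimensions); the cosine-similarity math is the standard way nearness between embeddings is measured.

```python
# Embeddings as plain vectors: similar meanings sit close together,
# and cosine similarity measures that closeness.
import math

EMBEDDINGS = {                      # toy 3-D vectors, hand-made
    "cat":          [0.90, 0.10, 0.00],
    "kitten":       [0.85, 0.15, 0.05],
    "stock market": [0.00, 0.10, 0.95],
}

def cosine_similarity(a, b):
    """Dot product of a and b, normalized by their lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar(word):
    """Nearest neighbor by cosine similarity — the core of semantic search."""
    others = [w for w in EMBEDDINGS if w != word]
    return max(others, key=lambda w: cosine_similarity(EMBEDDINGS[word],
                                                       EMBEDDINGS[w]))
```

Swap the toy dictionary for model-generated vectors and a vector database, and this nearest-neighbor lookup is exactly what powers semantic search and RAG retrieval.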
Transformer Architecture Basics
Transformers are the neural network architecture behind every modern LLM. Self-attention lets the model weigh how relevant each word is to every other word — enabling long-range understanding.
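The "weigh how relevant each word is to every other word" step is scaled dot-product attention, shown here in plain Python on tiny toy matrices. Q, K, and V are assumed to be already projected; real transformers learn those projections and run this per attention head.

```python
# Scaled dot-product self-attention on toy 2-D "token" vectors:
# each output is a similarity-weighted mix of the value vectors.
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        # Score this query against every key, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Mix the value vectors by those weights.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = K = V = [[1.0, 0.0], [0.0, 1.0]]   # two toy token vectors
result = attention(Q, K, V)
```

Each token attends most strongly to itself here, but nothing in the math limits how far apart two tokens can be — which is where the long-range understanding comes from.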
Context Window Explained — What LLMs Can "Remember"
The context window is the total amount of text an LLM can see at once — both your input and its output. Understanding it helps you avoid "forgetting" issues and use AI tools more effectively.
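The "forgetting" behavior falls straight out of a fixed budget. This sketch uses the common strategy of dropping the oldest messages first; counting one token per word is a rough stand-in for a real tokenizer.

```python
# Why chat AIs "forget": history must fit a fixed token budget, so the
# oldest messages are dropped first. One token per word is a rough
# approximation of real tokenization.

def count_tokens(text):
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages whose total token count fits."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                       # everything older is "forgotten"
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["hello there", "how are you today", "tell me about transformers"]
visible = fit_to_window(history, 8)
```

With a budget of 8 toy tokens, the opening message no longer fits — which is exactly why a long chat stops remembering its beginning.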
What is Multimodal AI?
Multimodal AI processes multiple types of input — text, images, audio, and video — in a single model. GPT-4o and Gemini 1.5 Pro are leading examples.
AI Hallucination — Why LLMs Make Things Up
AI hallucination occurs when a language model generates plausible-sounding but factually incorrect information. Understanding why it happens helps you use AI tools more safely.
What is Function Calling / Tool Use in LLMs?
Function calling lets LLMs trigger external actions — searching the web, running code, querying databases — by outputting structured JSON that your application executes.
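The handshake is small enough to show end to end: the model's reply is structured JSON, and the application, not the model, executes it. The JSON shape and the `get_weather` tool below are illustrative assumptions, not any vendor's exact schema.

```python
# Function calling in miniature: the LLM outputs a JSON action, and
# your application dispatches it to real code.
import json

def get_weather(city):
    # Stub tool; a real app would call a weather API here.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Pretend this string came back from the LLM:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
```

The key safety property: the model only *requests* an action in text, and your code decides whether and how to run it.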