The 5 Slides Generative AI Theoretical Training Course
We recently delivered a generative AI training for a customer in India. The goal was simple: explain what generative AI is, how it works, and what to watch out for, all in 5 slides.
The theoretical part takes around 2 hours, followed by an additional 6 hours of hands-on practical training. For large audiences, plan 2-3 days of reinforced training activities to work directly with team members.
Transparency is one of our core values at Lab34. We share the training materials openly: you can download the full slide deck below.
Download the training slides (PDF)
What is AI? What is Generative AI?
AI is software that performs tasks typically requiring human intelligence. Generative AI is a subset that creates new content (text, images, code) based on patterns learned from training data.
Is it intelligent? Not in the way we are. It has no understanding, no awareness. It recognizes patterns and produces outputs that look coherent. What we know: it is powerful and useful. What we feel about it: that depends on who you ask.
Slide 1: What is a Token
A token is the basic unit an AI model works with. It is not a word; it is a chunk of text. A word like "understanding" might be split into multiple tokens. Numbers, punctuation, and spaces are also tokens. Everything the model reads and writes is tokenized first.
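To make the idea concrete, here is a toy tokenizer. It is not a real model vocabulary (production models use learned subword schemes such as BPE); it simply splits text into word pieces, punctuation, and spaces, and breaks long words into 4-character chunks to mimic subword splitting.

```python
import re

def toy_tokenize(text):
    """Toy tokenizer for illustration only: splits text into word
    pieces, punctuation, and spaces. Long words are chopped into
    4-character chunks to mimic how real subword vocabularies
    split rare words into multiple tokens."""
    pieces = []
    for match in re.findall(r"\w+|[^\w\s]|\s", text):
        if match.isalpha() and len(match) > 6:
            pieces.extend(match[i:i + 4] for i in range(0, len(match), 4))
        else:
            pieces.append(match)
    return pieces

print(toy_tokenize("understanding"))  # ['unde', 'rsta', 'ndin', 'g']
```

Note how a single word becomes four tokens: this is why token counts are always higher than word counts.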
Slide 2: Predict the Next Token
This is the core mechanic. Given a sequence of tokens, the model predicts the most probable next token. Then the next. Then the next. That is all it does. The quality of its output comes from the scale of its training data and the patterns it has internalized.
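The loop above can be sketched in a few lines. A hand-built lookup table stands in for the trained neural network here; a real model scores every token in a large vocabulary, but the generation loop is the same: predict, append, repeat.

```python
# Toy stand-in for a trained model: each token maps to its single
# most probable successor. A real model produces a probability
# distribution over the whole vocabulary instead.
NEXT = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt_tokens, n_tokens):
    """Greedy generation loop: repeatedly predict the next token
    from the last one and append it to the sequence."""
    tokens = list(prompt_tokens)
    for _ in range(n_tokens):
        nxt = NEXT.get(tokens[-1])
        if nxt is None:  # no prediction available: stop generating
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"], 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

That really is the whole mechanic; everything else is scale.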
Slide 3: Families of Models
There are several major families of models available today:
- Google Gemini: Gemini Pro, Gemini Ultra
- Anthropic Claude: Haiku, Sonnet, Opus
- OpenAI GPT: GPT-4, GPT-4o
- Meta LLaMA: open-weight models
- Mistral: European open-weight models
Each family has different strengths, pricing, and context window sizes. The choice depends on the use case.
Slide 4: The Cut-off Date + The Context
Every model has a knowledge cut-off date: the point in time where its training data ends. It does not know anything that happened after that date unless you provide it.
Context is what you give the model right now: your prompt, your documents, your instructions. The model combines its trained knowledge with the context you provide to generate a response.
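A simple sketch of what "providing context" means in practice. The function name and text layout here are illustrative assumptions, not any vendor's API (real APIs use structured message lists), but the principle is the same: the model's trained knowledge is fixed, and context is whatever you pass in with the request.

```python
def build_context(instructions, documents, user_prompt):
    """Assemble everything the model will see right now:
    instructions, supporting documents, and the user's question.
    Illustrative only; real APIs take structured messages."""
    parts = [f"Instructions: {instructions}"]
    for i, doc in enumerate(documents, 1):
        parts.append(f"Document {i}: {doc}")
    parts.append(f"User: {user_prompt}")
    return "\n\n".join(parts)

prompt = build_context(
    "Answer using only the documents provided.",
    ["Q3 revenue grew 12% year over year."],
    "How did revenue change in Q3?",
)
print(prompt)
```

If the answer depends on events after the cut-off date, it must appear in one of those documents, or the model simply cannot know it.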
Slide 5: Context Window - Know Your Limits
The context window is the total number of tokens the model can handle in a single conversation (input + output combined). Here is how to think about utilization:
- ~40% usage: Safe zone. The model performs well.
- ~60% usage: Dangerous. Quality starts to degrade, and the model may lose track of earlier information.
- ~80% and above: No-go. The model will drop context, hallucinate, or produce unreliable output.
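The zones above translate directly into a utilization check. The thresholds are the slide's rules of thumb, not hard limits, and the 128,000-token window in the example is just an illustrative figure; check your model's actual limit.

```python
def context_zone(used_tokens, window_size):
    """Map context usage to the rough zones from Slide 5.
    Thresholds are rules of thumb, not hard model limits."""
    ratio = used_tokens / window_size
    if ratio < 0.40:
        return "safe"
    if ratio < 0.80:
        return "dangerous"  # quality starts to degrade in this band
    return "no-go"

# Illustrative 128k-token window; use your model's real limit.
print(context_zone(50_000, 128_000))   # 'safe' (~39%)
print(context_zone(110_000, 128_000))  # 'no-go' (~86%)
```

Tracking used tokens per request (most APIs report this in the response) makes this check trivial to automate.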
Watch Your Context Window
This is the single most practical takeaway from the training. Monitor your context usage. Keep conversations focused. Split large tasks into smaller ones. Do not assume the model remembers everything you gave it β especially as you approach the limits.
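"Split large tasks into smaller ones" can be mechanized with a greedy batcher. This is a sketch under assumed inputs: `chunk_tokens` is a parallel list of per-chunk token counts you would get from your tokenizer, and the budget might be set to roughly the ~40% safe zone of your model's window.

```python
def split_into_batches(chunks, chunk_tokens, budget):
    """Greedy batching: keep adding chunks to the current batch
    until the token budget would be exceeded, then start a new
    batch. `chunk_tokens[i]` is the token count of `chunks[i]`
    (an assumed helper input, not part of any real API)."""
    batches, current, used = [], [], 0
    for chunk, n in zip(chunks, chunk_tokens):
        if current and used + n > budget:
            batches.append(current)
            current, used = [], 0
        current.append(chunk)
        used += n
    if current:
        batches.append(current)
    return batches

print(split_into_batches(["a", "b", "c", "d"], [30, 30, 30, 30], 60))
# [['a', 'b'], ['c', 'd']]
```

Each batch then becomes its own focused conversation, keeping every request comfortably inside the safe zone.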