
AI fluency is now a competitive divide. This guide covers the four fundamentals: models, prompts, context, and tools. It helps you navigate AI's uneven capabilities and work smarter at the jagged frontier.
We have entered the agentic economy. AI systems no longer just respond to questions. They take initiative, research independently, and execute complex tasks with minimal supervision. This shift makes AI fluency a competitive divide. Not in five years. Right now.
Here is the catch: AI capabilities are uneven. Some tasks work brilliantly. Others fail completely. Researchers call this the "jagged frontier", an irregular boundary between what AI handles well and where it stumbles. You will encounter this frontier constantly. These four fundamentals help you navigate it.
I kept coming back to these principles. They proved useful enough to write down and share. Technology evolves quickly, but principles remain stable.
1. Models: Start With Any Leading Model
A leading model is one of the latest, most capable AI systems from the major labs. These models represent the current edge of what AI can do. In 2025, that includes Claude, GPT, Gemini, DeepSeek, and equivalents.
Leading models share similar core capabilities. They all handle complex reasoning, long documents, code generation, and nuanced analysis. The differences between them matter, but not at the start.
Here is what matters more: the framework around the model. APIs and integrations determine real-world value. Can you plug it into your workflow? Does it connect to your data sources? These questions matter more than benchmark scores.
Model selection becomes relevant as you advance. Different models excel at different tasks. Some handle code better. Others shine at creative work or long-context analysis. We will cover model selection in depth in a future post.
For now, pick any leading model and focus on the next three fundamentals. They apply universally.
2. Prompts: Information Density Over Length
You have to specify your intent. Models cannot guess what you want.
Think about the information-per-token ratio. A token is roughly a word or word fragment. What matters is clarity per word.
Bad prompt:
Can you help me analyze this data and tell me what you think about it and whether there are any insights I should know about
Better prompt:
Analyze the attached Q3 sales data. List the three fastest-growing product categories by revenue. One sentence each.
The second prompt is sharper. It specifies the dataset, the exact metric, and the output format. No ambiguity.
The principle: be explicit, not clever. Clear instructions beat fancy phrasing.
A few strategies that help: ask AI to clarify with yes-or-no questions when unsure. Use "explain as if I were five years old" for complex topics. For difficult problems, switch to reasoning models that think through solutions step-by-step. Show examples of what you want; this is called few-shot prompting, and it works remarkably well.
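Few-shot prompting is easy to sketch in code. The example below assembles a prompt from labeled examples before the real input, so the model imitates the pattern; the sentiment task, example reviews, and helper name are illustrative, not from any particular API.

```python
# A minimal sketch of few-shot prompting: the model sees labeled
# examples before the real input and imitates the pattern.
# The task and examples below are hypothetical.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (input, output) pairs plus a new input."""
    parts = ["Classify the sentiment of each review as positive or negative."]
    for text, label in examples:
        parts.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves "Sentiment:" open for the model to complete.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It stopped working after a week. Waste of money.", "negative"),
]

prompt = build_few_shot_prompt(examples, "Setup took two minutes and it just works.")
print(prompt)
```

The resulting string can be sent to any leading model; the examples carry the format, so the instruction itself can stay short.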
3. Context: You Are the Context Engineer
Context is everything the AI knows when it responds to you. You control this.
Each AI model has a context window, its working memory. It holds your conversation history, uploaded files, and any data you provide. The AI only knows what sits inside this window.
Do not expect AI to magically know your business context or industry specifics. You are the context engineer. Feed it quality inputs.
Bad context:
uploading your entire project folder and asking "what should I do next?"
Better context:
uploading only the three relevant documents, labeled clearly. "Here is our Q3 budget (Budget_Q3.xlsx), last quarter's results (Results_Q2.pdf), and the board's strategic priorities (Strategy_2025.docx). Identify where our spending conflicts with stated priorities."
The second approach eliminates noise. The AI knows exactly what to compare and why.
Garbage in, garbage out. Start new conversations for different topics. When conversations get bloated, summarize and start fresh. Think of context as the information diet you feed the AI. Tight context produces sharp results.
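The "better context" pattern above can be sketched as a small helper that labels each document and ends with one specific instruction. The file names echo the example in the text; the document contents and the helper itself are hypothetical placeholders.

```python
# A sketch of context engineering: pass only the relevant documents,
# each clearly labeled, followed by one specific instruction.
# Document contents here are hypothetical placeholders.

def build_context(documents, instruction):
    """documents: list of (label, filename, text) tuples."""
    sections = []
    for label, filename, text in documents:
        sections.append(f"### {label} ({filename})\n{text}")
    sections.append(instruction)
    return "\n\n".join(sections)

docs = [
    ("Q3 budget", "Budget_Q3.xlsx", "Marketing: $120k, R&D: $300k, Ops: $90k"),
    ("Q2 results", "Results_Q2.pdf", "Revenue up 8%; R&D shipped two features"),
    ("Strategic priorities", "Strategy_2025.docx", "1. Grow enterprise sales. 2. Cut ops cost."),
]

prompt = build_context(docs, "Identify where our spending conflicts with stated priorities.")
print(prompt)
```

Keeping the assembly explicit like this also makes it obvious when the context is getting bloated: if the list of documents grows, it is time to summarize and start fresh.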
4. Tools: Your Gateway to Agency
Without tools, an AI only knows what you tell it. It cannot look up facts, check current data, or run calculations.
With tools, an AI can take action. Modern models use web search and code execution. They research independently. They write code, run it, see results, and fix errors. They verify claims against current sources.
This is agency. The AI moves from passive responder to active collaborator.
You ask:
Calculate the compound annual growth rate for these investments.
Without tools, the AI writes the formula but cannot verify it works. With tools, it writes code, executes the calculation, and presents verified results. If the code fails, it debugs and tries again.
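This is what "writes code, executes the calculation, and presents verified results" looks like in practice: a few lines the model can run instead of doing arithmetic in its head. The investment figures below are illustrative, not from the article.

```python
# The CAGR calculation, verified by running the formula rather than
# trusting mental arithmetic. The figures are illustrative.

def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# An investment growing from $10,000 to $19,000 over 5 years:
rate = cagr(10_000, 19_000, 5)
print(f"CAGR: {rate:.1%}")  # CAGR: 13.7%
```

If the code raised an error, a tool-using model would see the traceback, fix the code, and rerun it; that execute-observe-correct loop is exactly the agency the section describes.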
New frameworks let you connect multiple tools. You can build custom agents tailored to your workflow. Start with built-in tools. Explore custom integrations as you grow comfortable.
The Jagged Frontier
AI capabilities do not progress in a straight line. They form an irregular boundary. The model that writes compelling analysis might stumble on arithmetic. The one that generates elegant code might miss obvious logical errors.
You cannot predict exactly where the frontier lies for any given task. But you develop intuition through use. Each interaction teaches you when to trust AI output directly and when to verify carefully.
This is why the four fundamentals matter. Good prompts reduce ambiguity at the frontier. Quality context gives the AI better ground to stand on. Tools provide verification when you need certainty.
The professionals who master these fundamentals work faster and deliver better insights. They provide direction, judgment, and taste. AI handles research, analysis, and synthesis. The collaboration works because they navigate the jagged frontier deliberately.
The frontier keeps shifting. These fundamentals remain your map.

Head of Research
27.11.2025