
This post shows what the three pillars of an agent-ready environment look like when a real research question arrives.
In a previous post, I described three pillars that define an agent-ready environment: memory, feedback loops, and a map of context. That post was conceptual. This one is practical. We recently ran a study on seasonal patterns in Swiss listed real estate. Fifteen years of data, eighty-two vehicles, six statistical tests, charts in three languages, a compiled PDF. It took less than an hour of active work. Here is how each pillar made that possible.
What Research-Grade Output Means
First, a note on what the pipeline produces. This matters because the bar for output quality determines whether the system is useful.
The output is a finished research document. Data extracted from a structured database. Statistical analysis across six methods. Charts generated in three languages. A compiled PDF with references, methodology, and clean design.
The output is reproducible, multi-language, and well designed. The kind of work that used to take a research team several weeks.

Memory: Nothing Solved Twice
Memory means the system never starts from scratch.
Every past project leaves behind reusable components: database connectors, output formats, visualisation standards. The seasonality study reused pieces built months earlier. Each project makes the next one faster. This is compounding applied to infrastructure.
The same applies to data. The system connects to fifteen years of daily prices, net asset values, dividend dates, and fund characteristics. All structured. All queryable in seconds.
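As a sketch of what "all structured, all queryable in seconds" means in practice: the schema, table, and fund names below are hypothetical stand-ins, not the actual database.

```python
import sqlite3

# Hypothetical schema: daily closing prices per listed vehicle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (vehicle TEXT, date TEXT, close REAL)")
conn.executemany(
    "INSERT INTO prices VALUES (?, ?, ?)",
    [("FUND_A", "2024-01-31", 101.2),
     ("FUND_A", "2024-02-29", 102.8),
     ("FUND_B", "2024-01-31", 98.4)],
)

# Structured data means a question like "how many observations per
# vehicle?" is one query away, not a manual spreadsheet exercise.
rows = conn.execute(
    "SELECT vehicle, COUNT(*) FROM prices GROUP BY vehicle ORDER BY vehicle"
).fetchall()
print(rows)  # [('FUND_A', 2), ('FUND_B', 1)]
```

The point is not the SQL itself but that an agent pointed at a structured store can answer such questions without human mediation.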
A powerful model without structured data produces generic output. Memory is what gives agents something real to work with.
The principle for any organisation: capture what you build. Make past work retrievable. Every solved problem should make the next one cheaper.
Feedback Loops: Verification Without Supervision
Feedback loops let the system check its own work.
In the seasonality study, the pipeline validates outputs at each stage. Data extraction is checked against expected formats. Statistical results are cross-verified across methods. Charts are generated from validated data, not raw inputs.
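A minimal sketch of that stage-wise idea, with hypothetical checks standing in for the real ones: extraction validates format, analysis cross-verifies a statistic via a second computation path, and charts only ever see validated data.

```python
def extract(raw_rows):
    """Stage 1: extraction, checked against the expected format."""
    for row in raw_rows:
        if not (isinstance(row, dict) and "month" in row and "ret" in row):
            raise ValueError(f"unexpected row format: {row!r}")
    return list(raw_rows)

def analyse(rows):
    """Stage 2: compute a statistic, cross-verified by a second path."""
    mean_a = sum(r["ret"] for r in rows) / len(rows)
    mean_b = sum(sorted(r["ret"] for r in rows)) / len(rows)  # independent route
    assert abs(mean_a - mean_b) < 1e-12, "cross-verification failed"
    return mean_a

def chart_input(rows, mean):
    """Stage 3: charts are built from validated data, never raw input."""
    return {"points": [(r["month"], r["ret"]) for r in rows], "mean": mean}

raw = [{"month": 1, "ret": 0.012}, {"month": 2, "ret": -0.004}]
validated = extract(raw)
result = chart_input(validated, analyse(validated))
print(round(result["mean"], 4))  # 0.004
```

Each stage refuses to pass bad data forward, which is what lets later stages run unsupervised.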
Without these loops, speed becomes a liability. An agent that produces output fast but cannot verify it creates more problems than it solves.
The principle: build verification into the process, not after it. If a human must review every intermediate output, you have not built a system. You have built a faster way to create more work.
Map of Context: The System Knows Where to Look
A map of context means the system understands your organisation without being told everything at once.
When an instruction says "analyse monthly returns across all listed vehicles," the system knows what that means. It finds the right data, applies the right conventions, formats to the right standards. Not because of a massive instruction file, but because the environment is navigable.
Think of it as a well-organised office. A new employee does not need a thousand-page manual; they need to know where things are. The same applies to agents.
The principle: make your knowledge findable. An agent, like a new colleague, should be able to navigate to the right information without asking someone.
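One lightweight way to make knowledge findable is a small machine-readable index the agent consults before anything else. Everything below, from the paths to the conventions, is a hypothetical illustration, not the actual layout.

```python
# A hypothetical "map of context": where things live and which
# conventions apply, so an agent can navigate rather than be told.
CONTEXT_MAP = {
    "data": {
        "prices": "db/prices.sqlite",
        "dividends": "db/dividends.sqlite",
    },
    "conventions": {
        "returns": "monthly, total-return, CHF",
        "charts": "house style, DE/FR/EN labels",
    },
    "templates": {
        "report": "templates/research_note.tex",
    },
}

def locate(category: str, item: str) -> str:
    """Resolve a request like ('data', 'prices') to a concrete resource."""
    try:
        return CONTEXT_MAP[category][item]
    except KeyError:
        raise LookupError(f"no entry for {category}/{item} in the context map")

print(locate("data", "prices"))  # db/prices.sqlite
```

The design choice here is deliberate: a failed lookup raises an error instead of guessing, which is exactly the behaviour you want from an agent that does not yet know your organisation.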
What the Human Does
I described the research question. Chose the angle. Reviewed the statistical choices, adjusted the framing, directed a second pass where needed.
The system handled extraction, computation, formatting, and compilation. Multiple agents worked in parallel while I worked on something else.
The human role shifts from producing to directing. Less writing code, more designing the environment within which work gets done. The researcher becomes faster not because of the model, but because the infrastructure is already in place.
Three Questions Worth Asking
Every organisation will have access to the same models. What separates them is what those models find when they arrive.
Does your system remember what it has already built? Can it verify its own output? Can it find what it needs without asking someone?
If the answer is no to any of these, the constraint is not the model. It is the environment.
Read the full seasonality study here.
10 March 2026