
What the Models Find When They Arrive: From ChatGPT to Agentic Ecosystem

AI models are already extraordinarily capable. The binding constraint on value creation has shifted from model quality to organisational readiness. Three pillars define an agent-ready environment: memory, feedback loops, and a map of context.

In early 2026, a single trading session wiped 12 to 14 percent off the market capitalisation of several large real estate services firms after a series of AI-related announcements. While attention has focused on the technology itself, the more telling possibility is that the market is repricing readiness rather than capability.

Our own work building AI into real estate data points the same way: readiness is the differentiator. Three independent lines of evidence have converged on the same conclusion: AI models are already extraordinarily capable, and the binding constraint on value creation has shifted from model quality to environments designed for AI agents. For simplicity, I'll call the resulting organisational infrastructure an agentic ecosystem.

Software development has been the first domain transformed, because code is already structured, versionable, and testable. Yet the principles generalise. The agentic wave that reshaped engineering in 2025 is a preview of what every knowledge-intensive function will face. The evidence below draws from software because that is where the data exists, but the conclusions apply far beyond it. In my own academic research, I have applied the same approach to literature review, data analysis, writing, and editing. The results confirm the pattern: environmental design determines output far more than model selection.

Models Are Already Extraordinary

In February 2026, Nicholas Carlini at Anthropic published an experiment in which 16 AI agents, running in parallel with no human orchestrator, built a fully functional C compiler from scratch (Carlini, 2026): 100,000 lines of code, three architectures, $20,000, two weeks, zero human-written lines. The result matters less for its volume than for its consistency: every component must interoperate, which means the agents maintained coherence across a growing system without central supervision.

Around the same time, Ryan Lopopolo at OpenAI described a team of three engineers who shipped a production product with zero manually produced output (Lopopolo, 2026): one million lines across 1,500 contributions over five months, with throughput increasing as the team grew to seven. The bottleneck was never the model.

These are, notably, experiments run inside the laboratories that built the models. But the architectural lessons (structured verification, navigable knowledge, decision capture) do not depend on privileged access. The demonstrations show that the models can do the work. The interesting question is why most organisations achieve results nowhere near this level.

Beyond "Use ChatGPT"

The most common organisational response has been additive: give teams a chatbot, measure adoption by seat count, look for productivity gains at the margins. This is installing electricity and using it only to power the lights. The data bears this out. Workers using AI individually report halved task times and tripled productivity, yet organisation-level gains remain small (Mollick, 2025). The bottleneck has migrated from intelligence to institutions, and institutions move at institution speed.

The evidence points somewhere else: the processes themselves need to change. The best practices every organisation already knows (documentation, verification, knowledge management, decision traceability) can now be implemented at a level of consistency never previously achievable. The paradigm shift lies in taking those practices seriously, mechanically, at scale.

One category of organisation asks: "How can AI help with what we already do?" Another asks: "What would we build if every unit of knowledge work could be verified, traced, and compounded?"

Agentic Environment Over Model

Both Carlini and Lopopolo arrive at the same answer independently. The critical variable is the environment, far more than the agent itself.

Carlini's agents succeed because verification infrastructure, task coordination, and feedback loops are structured so agents can orient without human intervention. When signals are ambiguous, agents stall. His most productive effort went into the verification infrastructure: machine-readable outputs, limited noise, decomposition into independent subsets. The environment steered agents far more effectively than instructions ever could. The principle generalises: in any domain, the quality of feedback determines the quality of autonomous work.
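
To make "machine-readable verification" concrete, here is a minimal sketch in Python. The checks and the JSON report schema are my own illustration, not Carlini's actual harness; his checks were compiler test suites, but the shape is the same: every check answers pass or fail, with detail only on failure.

```python
import json

# Two illustrative checks. In Carlini's setup these were compiler test
# suites; any domain can expose verifiable predicates the same way.
def check_nonempty(source: str) -> tuple[bool, str]:
    ok = bool(source.strip())
    return ok, "" if ok else "empty input"

def check_line_length(source: str, limit: int = 100) -> tuple[bool, str]:
    long_lines = [i for i, line in enumerate(source.splitlines(), 1) if len(line) > limit]
    return not long_lines, f"lines over {limit} chars: {long_lines}" if long_lines else ""

def run_harness(source: str) -> str:
    """Run every check and emit one compact JSON report an agent can parse."""
    checks = {"nonempty": check_nonempty, "line_length": check_line_length}
    results = []
    for name, check in checks.items():
        ok, detail = check(source)
        # Limited noise: one result per check, detail only when it fails.
        results.append({"check": name, "pass": ok, "detail": detail})
    return json.dumps({"all_pass": all(r["pass"] for r in results), "results": results})

print(run_harness("int main(void) { return 0; }"))
```

The single JSON line is the whole interface: an agent parses it, sees which check failed, and acts, with no human reading logs in between.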

Lopopolo's team tried the opposite first: a single comprehensive instruction file. It was too long, too detailed, and stale within days. Their solution was a navigable map: a short index as table of contents, deeper sources versioned, cross-linked, and mechanically validated. Agents learn where to look rather than being overwhelmed. Anything an agent can't access effectively doesn't exist. Every company has essential knowledge trapped in formats agents cannot reach. Making it explicit and navigable is the highest-leverage investment available.
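
As a sketch of what "mechanically validated" can mean, the script below checks that every relative link in an index file points at a file that actually exists. The index location and the markdown link format are assumptions for illustration; Lopopolo does not publish the team's tooling.

```python
import re
import sys
from pathlib import Path

INDEX = Path("docs/INDEX.md")  # hypothetical location of the short index

def validate_index(index_path: Path) -> list[str]:
    """Return one error per markdown link whose target file does not exist."""
    errors = []
    text = index_path.read_text(encoding="utf-8")
    # Matches [label](relative/path.md); external http(s) links are skipped.
    for label, target in re.findall(r"\[([^\]]+)\]\(([^)]+)\)", text):
        if target.startswith("http"):
            continue
        if not (index_path.parent / target).exists():
            errors.append(f"broken link: [{label}] -> {target}")
    return errors

if __name__ == "__main__":
    problems = validate_index(INDEX)
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the build: a mechanical gate
```

Run in continuous integration, a check like this keeps the map honest: a link cannot silently rot, so agents (and people) can trust what the index promises.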

Meanwhile, Gupta and Garg at Foundation Capital (writing as investors, not engineers) identified the same pattern at the enterprise level (Gupta and Garg, 2025). Current systems of record store what happened but do not capture *why*: the inputs, the policy applied, the rationale, the approval chain. Companies that capture these *decision traces* will own a layer of organisational truth no incumbent system possesses. Automatic versioning of decisions (not just outcomes) becomes the mechanism by which institutional knowledge builds rather than decays.
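
Gupta and Garg do not prescribe a schema, so the fields below are an illustrative guess at the minimum a decision trace would need: the inputs, the policy applied, the rationale, and the approval chain, captured in an append-only log.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One decision, captured with its why, not just its outcome."""
    decision: str                      # what was decided
    inputs: dict                       # data the decision was based on
    policy: str                        # the rule or policy applied
    rationale: str                     # why this outcome, in plain language
    approvals: list[str] = field(default_factory=list)  # approval chain
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_trace(trace: DecisionTrace, log_path: str = "decisions.jsonl") -> None:
    # Append-only JSONL: one decision per line, so the log itself is versionable.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(trace)) + "\n")

# Hypothetical example entry, for illustration only.
append_trace(DecisionTrace(
    decision="approve vendor X",
    inputs={"quotes": 3, "budget_chf": 50_000},
    policy="procurement-rule-7: three quotes above CHF 20k",
    rationale="lowest quote meeting the security requirements",
    approvals=["ops-lead", "cfo"],
))
```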

Three independent sources, yet one conclusion: the scarce resource is environmental readiness.

Three Pillars

At a sufficiently high level of abstraction, environments optimised for AI agents and environments optimised for rigorous governance turn out to be the same environment. Three pillars define both.

Memory. The mechanism is automatic versioning: every change tracked, attributable, reversible. An organisation should never solve the same problem twice. Every decision, solution, and convention captured once and made retrievable: knowledge as infrastructure that grows alongside well-structured work, rather than a separate deliverable maintained in parallel. Each answered question, properly captured, makes the next similar question cheaper.
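
As an illustration of "captured once and made retrievable", here is a naive keyword lookup over an append-only log like the hypothetical one sketched above. A real system would use proper search or embeddings; the point is only that recall precedes re-work.

```python
import json
from pathlib import Path

def recall(query: str, log_path: str = "decisions.jsonl") -> list[dict]:
    """Return past entries whose text mentions every word in the query."""
    words = query.lower().split()
    hits = []
    for line in Path(log_path).read_text(encoding="utf-8").splitlines():
        entry = json.loads(line)
        haystack = json.dumps(entry).lower()
        if all(w in haystack for w in words):
            hits.append(entry)
    return hits

# Before solving a problem, ask whether it was already solved.
for entry in recall("vendor security"):
    print(entry["decision"], "->", entry["rationale"])
```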

Feedback Loops. Without mechanical verification, memory is aspirational. Feedback must be mechanical rather than relying on human attention: validation systems with machine-readable outputs, quality gates that agents can parse and act on. Constraints operate as multipliers. A rule that enforces a boundary prevents an entire category of mistakes at zero marginal cost, scaling automatically with volume. Drift is the default; agent-scale output means agent-scale drift unless caught continuously.
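
One way to picture a constraint as a multiplier: a gate that rejects any captured decision missing its required fields, so a whole category of undocumented decisions becomes impossible. The field names below match the hypothetical trace schema sketched earlier.

```python
import json
import sys

REQUIRED = ("decision", "inputs", "policy", "rationale")

def gate(line: str) -> list[str]:
    """Return violations for one log entry; empty list means it passes."""
    entry = json.loads(line)
    # Flags fields that are absent or empty.
    return [f"missing field: {f}" for f in REQUIRED if not entry.get(f)]

if __name__ == "__main__":
    # Read entries on stdin and exit non-zero on the first violation, so the
    # gate can sit in a pipeline and scale with volume at no marginal cost.
    for n, line in enumerate(sys.stdin, 1):
        if not line.strip():
            continue
        violations = gate(line)
        if violations:
            print(json.dumps({"line": n, "violations": violations}))
            sys.exit(1)
```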

Map of Context. A new employee and a strategic decision-maker should be served by one navigable structure, not divergent onboarding decks and strategy documents. Concise overview first, clear pathways to depth, resolution on demand. The organisation gains a source of truth that scales with complexity. Modularity enables parallel work; entanglement makes every task a whole-organisation task.

The three are not independent. Memory without feedback loops is unreliable. Feedback without a map fires blindly. The system works as a whole or not at all.

The Human Role

The shift bypasses "supervision" entirely. It moves from *producing work* to *designing the system within which work gets produced*. In both experiments, humans designed verification harnesses, knowledge architectures, governance models, and enforceable rules. Both teams spent the majority of their effort on environmental design.

AI makes this the default mode of contribution for a much larger share of the workforce. It demands architectural skills rather than production skills: making tacit knowledge explicit, defining constraints without micromanaging, building feedback loops that catch drift before it accumulates.

The instinct is to layer additional tools on top. The opposite is correct. The foundational capabilities (versioning, review, traceability, conflict resolution) already exist. The work is structuring what the organisation already has. Each decision captured makes the next one cheaper. Each constraint enforced mechanically prevents drift.

What This Means

Capability bottlenecks have not vanished; they have become secondary. Verification in law or strategy is harder than in software. Feedback loops can encode bias as easily as they prevent error. The principles are not a guarantee; they are a necessary condition. But the alternative, layering AI onto processes that have not been restructured, has already shown its ceiling.

Consider a simple exercise. Drop an agent into your organisation today. Can it find what it needs? Can it verify its own output? Can it learn from last quarter's decisions without asking someone? If the answer to any of these is no, the constraint is not the model but the environment built around it.

This holds for research, compliance, finance, operations, legal, and strategy alike. Every organisation will have access to the same models. What separates them is what those models find when they arrive.

References

Carlini, N. (2026). "Building a C compiler with a team of parallel Claudes." Anthropic Research Blog.

Gupta, J. and Garg, A. (2025). "AI's trillion-dollar opportunity: Context graphs." Foundation Capital.

Lopopolo, R. (2026). "Harness engineering: leveraging Codex in an agent-first world." OpenAI Engineering Blog.

Mollick, E. (2025). "The Shape of AI: Jaggedness, Bottlenecks and Salients." One Useful Thing.



Feb 19, 2026
