askOdin — AI Judgment Infrastructure for Capital Allocation

The Layer Beneath the Agents

The agent boom is solving the execution layer. The judgment layer is the gap LPs will start asking about next.

By LOK YekSoon · 4 min read

Every venture deck on my desk this quarter has the word “agent” in it. Every fund I speak to is either deploying agents internally or evaluating someone who does. The category has gone from research curiosity to default architecture in roughly eighteen months.

This is not the contrarian part of the essay. Agents are real. They work. They will compound in capability through the rest of this decade.

The contrarian part is what the boom is quietly creating underneath itself.

The execution layer is solved. The judgment layer is not.

An agent is, structurally, an execution system. It takes an instruction, decomposes it into steps, calls tools, observes outcomes, and iterates. The closer you look, the more clearly agents resemble a faster, cheaper, more autonomous version of an operations team.

That is genuinely useful. It is also genuinely insufficient for capital allocation.

Allocation is not an execution problem. It is a judgment problem. The question a partner asks before writing a cheque is not “can this be processed faster?” It is “is this right, can I defend it, and will the verdict survive the next IC meeting and the next LP review?” An agent that screens five hundred decks a quarter does not answer that question. It multiplies it.

Every additional agent in a fund’s stack adds probabilistic output to a workflow that ultimately requires a deterministic verdict. The faster the agents run, the larger the unaudited surface area becomes.

The auditability gap is widening, not closing

Pick any other industry where capital is allocated at scale and you find a deterministic substrate underneath the workflow. Credit has underwriting. Insurance has actuarial science. Public markets have GAAP and SOX. Trading has clearing and settlement.

Venture has none of this. It has narrative, network, and conviction — none of which are reproducible across analysts, across firms, or across cycles. The asset class has operated on judgment that lives inside individual heads, and the absence of an audit layer was tolerable when the volume of decisions was small and the LPs were patient.

Agents break that tolerance. When a GP tells an LP “our agent screened five hundred deals this quarter and surfaced these twenty,” the next question is methodology. Show me the rules. Show me reproducibility. Show me what changed between the deal you funded and the seventeen you killed. No probabilistic system can answer those questions, because the system itself does not know what it did.

The diligence crisis I wrote about earlier this year was already structural. The agent boom is making it acute.

Agents have no native referee

The second-order problem is internal to the agent stack itself. As funds deploy agents in sequence — a sourcing agent, a screening agent, a diligence agent, a memo agent — the outputs of one agent become the inputs of the next. Errors compound. Disagreements between agents have no arbiter. Contradictions between an agent’s verdict and the underlying data room have no resolution mechanism.

The instinct of the agent ecosystem is to solve this with another agent. An “orchestrator,” a “judge agent,” a “supervisor.” This is a category error. Adding a probabilistic referee to a probabilistic system does not produce determinism. It produces a more expensive probabilistic system with one more layer of failure modes.

Arbitration requires a substrate the agents can appeal to — a set of rules they did not author, applied consistently across every input, producing reproducible verdicts that can be inspected after the fact. That substrate is not another agent. It is infrastructure.
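The properties that substrate needs — rules the agents did not author, consistent application, verdicts that are reproducible and inspectable after the fact — can be made concrete with a minimal sketch. Everything below is illustrative: the rule names, the input fields, and the `adjudicate` function are hypothetical stand-ins, not the RUNE protocol itself.

```python
from dataclasses import dataclass
import hashlib
import json

# Hypothetical rule set: pure functions fixed outside the agent stack.
# Each rule maps a structured input to a pass/fail outcome.
RULES = {
    "revenue_disclosed": lambda d: d.get("revenue") is not None,
    "runway_months_min": lambda d: d.get("runway_months", 0) >= 12,
    "cap_table_complete": lambda d: d.get("cap_table_entries", 0) > 0,
}

@dataclass(frozen=True)
class Verdict:
    passed: bool
    trail: dict          # per-rule outcomes, inspectable after the fact
    fingerprint: str     # hash of input + trail: same input, same verdict

def adjudicate(deal: dict) -> Verdict:
    # Apply every rule to every input, in the same way, every time.
    trail = {name: rule(deal) for name, rule in RULES.items()}
    payload = json.dumps({"input": deal, "trail": trail}, sort_keys=True)
    return Verdict(
        passed=all(trail.values()),
        trail=trail,
        fingerprint=hashlib.sha256(payload.encode()).hexdigest(),
    )

deal = {"revenue": 1.2e6, "runway_months": 18, "cap_table_entries": 9}
v1, v2 = adjudicate(deal), adjudicate(deal)
assert v1.fingerprint == v2.fingerprint  # reproducible across runs
```

The point of the sketch is the contrast: an agent can disagree with this verdict, but it cannot renegotiate it, and an LP can replay the same input a year later and get the same fingerprint.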

The Visa analogy

The right mental model for what comes next is not “smarter agents.” It is the relationship between application-layer software and the deterministic rails underneath it.

Visa does not compete with the bank’s mobile app. It is the layer below — the rails that settle every transaction the app initiates. Moody’s does not compete with the trading desk. It is the rating substrate that every credit decision references. GAAP does not compete with the CFO’s spreadsheet. It is the framework the spreadsheet has to reconcile against.

Capital allocation needs the same layering. The agents are the application surface — the things that touch the deck, the data room, the founder. Underneath them sits the deterministic judgment layer — protocols like RUNE — that turn probabilistic output into a defensible verdict. One number a partner can sign their name next to. One audit trail an LP can review. One reproducible methodology an IC can interrogate.

That layer is what we are building.

What the next five years actually look like

The funds that win the next cycle will not be the ones with the most agents. They will be the ones whose agents sit on top of a judgment layer that gives them auditability, reproducibility, and defensibility — the three properties LPs are quietly starting to demand and that no agent framework can provide on its own.

The infrastructure question becomes: what authoritative external signal does an agent reference when it is asked to make a judgment call? Today the answer is “whatever the underlying foundation model decides.” In three years that answer will not survive contact with an LP’s compliance team. The answer has to be a deterministic feed — a Clarity Score, a structured verdict, a methodology document — that the agent can subscribe to the way a trading system subscribes to a price feed.
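The price-feed analogy can be sketched in a few lines. The interfaces below are invented for illustration — `StructuredVerdict`, its `clarity_score` field, and the `VerdictFeed` class are hypothetical, assuming only that the agent's branch point references an external deterministic signal rather than its own model's opinion.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class StructuredVerdict:
    deal_id: str
    clarity_score: float      # hypothetical deterministic score, 0-100
    methodology_version: str  # pins the rules the verdict was made under

class VerdictFeed:
    """Stand-in for a deterministic feed the agent subscribes to."""
    def __init__(self):
        self._subscribers: list[Callable[[StructuredVerdict], None]] = []

    def subscribe(self, handler: Callable[[StructuredVerdict], None]):
        self._subscribers.append(handler)

    def publish(self, verdict: StructuredVerdict):
        for handler in self._subscribers:
            handler(verdict)

actions = []

def memo_agent(v: StructuredVerdict):
    # The judgment call references the feed, not the foundation model.
    actions.append(("draft_memo" if v.clarity_score >= 70 else "kill", v.deal_id))

feed = VerdictFeed()
feed.subscribe(memo_agent)
feed.publish(StructuredVerdict("deal-017", 82.0, "rune-2025.1"))
feed.publish(StructuredVerdict("deal-018", 41.5, "rune-2025.1"))
# actions == [("draft_memo", "deal-017"), ("kill", "deal-018")]
```

Note what the `methodology_version` field buys: when a compliance team asks why deal-018 was killed, the answer is a pinned rule set, not a model's latent state.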

This is the quiet structural shift the agent boom is producing. Not a winner-take-all race among agent frameworks. A bifurcation between the application layer (loud, crowded, probabilistic) and the infrastructure layer (quiet, sparse, deterministic).

The closing observation

Agents do more. Judgment decides what mattered.

The era of agentic AI is not the era of artificial judgment. It is the era when the absence of judgment infrastructure becomes the most expensive gap in capital markets. Every probabilistic output produced by an agent has to eventually be reconciled against a deterministic verdict, or the entire workflow collapses under the weight of its own unaudited volume.

The market is loud right now because the application layer always is. The interesting work is happening one layer down.


YekSoon Lok is the Founder & CEO of askOdin, building the AI Judgment Infrastructure for private capital.

Explore how tier-1 funds deploy deterministic diligence on the Clarity platform, or read the VC Diligence Protocol for the operating manual.