Ronak Upadhyaya

AGENTS CROSS THE HAYEKIAN FRONTIER
AI agents and the acquisition of situated, tacit knowledge
In his 1945 essay "The Use of Knowledge in Society," Friedrich Hayek argued that the central economic problem had been persistently misunderstood. The challenge was not how to allocate resources given perfect information, but how to make use of knowledge that exists nowhere in totality, only in fragments distributed across millions of individual minds. The knowledge that matters most for economic coordination is local, dispersed, often tacit, and bound to particular circumstances of time and place.
The factory foreman was Hayek's archetype. The foreman knows how to keep machines running, how to respond when they break, how to manage the flow of materials and labor so that production continues smoothly. This knowledge is not secret, and no one is deliberately withholding it. It is simply the kind of knowledge that resists centralization because it is generated through proximity to the work itself, legible only to someone who inhabits that environment and learns its structure from within.
For most of computing history, software could not acquire this kind of knowledge. What makes AI agents structurally different is not that they can act, but that they can come to know. They can embed themselves in particular environments and accumulate the situated knowledge that makes intelligent action possible in those environments.
The Explicitness Prerequisite
Traditional software operated only on knowledge that humans made explicit in advance. Database schemas had to be designed. Business rules had to be written. Every entity, every property, every process the software could reason about had to be specified by someone who understood the domain and could translate that understanding into formal structure. This was enormously productive, but it had a hard boundary. The vast reservoir of situated, tacit, circumstantial knowledge that Hayek described remained inaccessible. It lived in the heads of the people who possessed it, expressed as judgment, habit, and feel rather than as formalized data.
A pre-trained model acquires general knowledge and general reasoning capacity. It can parse ambiguous instructions, weigh competing considerations, and generate coherent responses across a wide range of domains. But general reasoning, however powerful, is not situated reasoning. There is a difference between knowing how contracts work in the abstract and knowing how this firm, with this client, under this partner's preferences, handles contracts in practice. The former can be learned from text corpora. The latter can only be learned by inhabiting a specific context and reading its accumulated artifacts.
Accumulation Through Embedding
The phenomenon appears wherever an agent inhabits a specific environment long enough to learn its structure. Consider Cowork, a desktop agent that operates across a person's local files and applications. Before it can help reorganize a project folder or draft a memo in the right voice, it must first discover how that person actually works. It reads documents, notices that quarterly reports follow a certain structure, learns that project files are organized by client name, infers that meeting notes use a format no one documented but everyone follows. None of this knowledge exists in any public index. It is embedded in the accumulated patterns of one person's practice, legible only to something that inhabits that environment and learns to read its artifacts.
The same dynamic operates at the scale of an institution. Harvey, embedded in a law firm, learns which precedents partners favor, how a specific client's risk tolerance shapes negotiation strategy, and the institutional memory about opposing counsel's tendencies. None of this is written down in a handbook. It exists in the accumulated habits, preferences, and judgment calls of the firm's practitioners, legible only to something that has spent enough time reading their work product to internalize the patterns.
Sierra illustrates a different surface of the same phenomenon. Its agents operate within a company's customer environment, handling inquiries across chat, phone, and other channels. Over the course of sustained interaction, the agent discovers the product's common failure modes, the edge cases in the return policy, and the specific language that resolves frustration most effectively for that brand's customer base. This knowledge is not available in any training manual or FAQ. It is generated through proximity to actual customers, each conversation revealing something about what this particular company's users need and how they respond.
Crucially, Hayek's factory foreman did not learn the factory floor in a single shift. His knowledge accreted over thousands of interactions, compounding until his understanding of that specific environment became difficult to replicate or transfer. The same temporal dimension applies to agents. An agent that retains what it has learned from prior interactions, carrying context across sessions, channels, and systems, compounds its situated knowledge in a way that a stateless system never can. Each conversation enriches the next, and the accumulation itself becomes a source of value that no competitor can reproduce simply by deploying a stronger base model.
What unites these cases is that the agent is acquiring knowledge that is constitutively local, knowledge that exists only in the specific context it inhabits. This is Hayek's factory foreman in digital form. The foreman's knowledge was valuable precisely because it was particular, unrepeatable, and inaccessible to anyone who had not spent time on that specific factory floor. The knowledge an agent accumulates about a codebase, a firm's practices, or a company's customers has the same character. It is generated through proximity, not retrieved from a general corpus.
Intelligence and Situation
Hayek's deepest insight was that the most economically valuable knowledge resists centralization, not because it is secret but because it is situated. For most of history, only humans could acquire it. Agents cross a frontier that software could not previously reach. They do not access a comprehensive index of the world's information. They embed themselves in particular environments and learn those environments' structure from within. The stronger the underlying model, the deeper this situated learning can go, because a more capable reasoner is a better reader of the particular. Intelligence without situation remains incomplete. Agents are the first software systems that can go out and find their situation for themselves.