Supporting our AI overlords: Redesigning data systems to be Agent-first
This Berkeley systems group paper opens with the thesis that LLM agents will soon dominate data system workloads. These agents, acting on behalf of users, do not query like human analysts or even like the applications humans write. Instead, LLM agents bombard databases with a storm of exploratory requests: schema inspections, partial aggregates, speculative joins, rollback-heavy what-if updates. The authors call this behavior agentic speculation.
Agentic speculation is positioned as both the problem and the opportunity. The problem is that traditional DBMSs are built for exact, intermittent workloads and cannot handle the high-throughput, redundant, and often inefficient querying of LLM agents. The opportunity lies in the same place: agentic speculation has recognizable properties that invite new designs. Databases should adapt by offering approximate answers, sharing computation across repeated subplans, caching grounding information in an agentic memory store, and even steering agents with cost estimates or semantic hints.
The paper argues the database must bend to the agent's style. But why not also consider the other direction? Why shouldn't agents be trained to issue smarter, fewer, more schema-aware queries? The authors take agent inefficiency as a given, I think, in order to preserve the black-box nature of general LLM agents. After a walkthrough of the paper, I'll revisit this question as well as other directions that occur to me.
Case studies
The authors provide experiments to ground their claims about agentic speculation. The first study uses the BIRD benchmark (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) with DuckDB as the backend. The goal is to evaluate how well LLMs can convert natural language questions into SQL queries. I, for one, welcome our SQL-wielding AI overlords! Here, agents are instantiated as GPT-4o-mini and Qwen2.5-Coder-7B.
The central finding from Figure 1 is that accuracy improves with more attempts, for both sequential setups (one agent issuing multiple turns) and parallel setups (many agents working at once). The success rate climbs by 14–70% as the number of queries increases. Brute forcing helps, but it also means flooding the database with redundant queries.
Figure 2 drives that point home. Across 50 independent attempts at a single task, fewer than 10–20% of query subplans are unique. Most work is repeated, often in a trivial manner. Result caching looks like an obvious win here.
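To make the caching intuition concrete, here is a minimal sketch of what subplan-level result caching could look like. This is my illustration, not the paper's design: I key on normalized SQL text, whereas a real optimizer would fingerprint logical plan trees.

```python
import hashlib

class SubplanCache:
    """Toy subplan-level result cache. Keys are normalized subplan
    fingerprints, so two attempts scanning the same table with the
    same filter hit the same entry."""

    def __init__(self):
        self._store = {}

    def _fingerprint(self, subplan_sql: str) -> str:
        # Cheap normalization: lowercase and collapse whitespace.
        # A real system would canonicalize the logical plan instead.
        canonical = " ".join(subplan_sql.lower().split())
        return hashlib.sha256(canonical.encode()).hexdigest()

    def get_or_compute(self, subplan_sql, compute_fn):
        key = self._fingerprint(subplan_sql)
        if key not in self._store:
            self._store[key] = compute_fn(subplan_sql)  # cache miss: run it
        return self._store[key]
```

If 50 attempts share 80–90% of their subplans, as Figure 2 suggests, most get_or_compute calls degenerate into dictionary lookups rather than scans.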
The second case study moves beyond single queries into multi-database integration tasks that combine Postgres, MongoDB, DuckDB, and SQLite. Figure 3 plots how OpenAI's o3 model proceeds. Agents begin with metadata exploration (tables, columns), move to partial queries, and eventually to full attempts. But the phases overlap in a messy and uncertain way. The paper then explains that injecting grounding hints into the prompts (such as which column contains information pertinent to the task) reduced the number of queries by more than 20%, which shows how steerability helps. So, the agent is like a loudmouth politician who spews less bullshit when his handlers give him some direction.
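As a toy illustration of that steering experiment, here is roughly what injecting grounding hints into the agent's prompt might look like. The function and hint format are my own guesses, not the paper's setup:

```python
def inject_grounding_hints(task_prompt: str, hints: list[str]) -> str:
    """Prepend grounding hints (e.g., which column holds the relevant
    data) to the agent's task prompt. Hypothetical reconstruction of
    the paper's prompt-injection experiment."""
    hint_block = "\n".join(f"- {h}" for h in hints)
    return f"Hints about the databases:\n{hint_block}\n\nTask: {task_prompt}"

prompt = inject_grounding_hints(
    "Find customers who bought in every region.",
    ["The column orders.region_id joins to regions.id",
     "Customer names live in crm.customers.full_name (Postgres), not MongoDB"],
)
```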
The case studies illustrate the four features of agentic speculation: Scale (more attempts do improve success), Redundancy (most attempts repeat prior work), Heterogeneity (workloads mix metadata exploration with partial and complete solutions), and Steerability (agents can be nudged toward efficiency).
Architecture
The proposed architecture aims to redesign the database stack for agent workloads. The key idea is that LLM agents send probes instead of bare SQL. These probes include not just queries but also "briefs" (natural language descriptions of intent, tolerance for approximation, and hints about the phase of work). Communication is key, folks! The database, in turn, parses these probes through an agentic interpreter, optimizes them via a probe optimizer that satisfices rather than guarantees exact results. It then executes these queries against a storage layer augmented with an agentic memory store and a shared transaction manager designed for speculative branching and rollback. Alongside answers, the system may return proactive feedback (hints, cost estimates, schema nudges) to steer agents.
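To make the probe abstraction concrete, here is a sketch of what a probe envelope might carry. The field names and defaults are my guesses; the paper leaves the exact format open.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """One agent-to-database probe: a batch of queries plus a natural-
    language brief. Field names are illustrative, not the paper's API."""
    queries: list[str]              # candidate SQL fragments, possibly speculative
    brief: str                      # NL description of intent
    phase: str = "exploration"      # e.g., "exploration" | "formulation" | "validation"
    error_tolerance: float = 0.10   # acceptable relative error for approximate answers
    feedback_wanted: bool = True    # ask the DB for hints/cost estimates back

probe = Probe(
    queries=["SELECT COUNT(*) FROM sales WHERE city = 'Berkeley'"],
    brief="Checking whether the sales table covers Berkeley at all; a rough count is fine.",
    phase="exploration",
    error_tolerance=0.25,
)
```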
The architecture is maybe too tidy. It shows a single agent swarm funneling probes into a single database engine, which responds in kind. So this looks very much like a single-client, single-node system. There is no real discussion of multi-tenancy: what happens when two clients, with different goals and different access privileges, hit the same backend? Does one client's agentic memory contaminate another's? Are cached probes and approximations shared across tenants, and if so, who arbitrates correctness and privacy? These questions are briefly mentioned under privacy concerns in Section 6, but the architecture itself is silent. Whether this single-client abstraction can scale to the real, distributed, multi-tenant world remains the key open question.
Query Interfaces
Section 4 focuses on the interface between agents and databases. Probes extend SQL into a dialogue by bundling multiple queries together with a natural-language "brief" describing goals, priorities, and tolerance for error. This allows the system to understand not just what is being asked, but why. For example, is the agent in metadata exploration or solution formulation mode? The database can then prioritize accordingly, providing a rough sample for schema discovery and a more exact computation for validation.
Two directions stand out for me. First, on the agent-to-system side, probes may request things SQL cannot express, like "find tables semantically related to electronics". This would require embedding-based similarity operators built into the DBMS.
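Here is a back-of-the-envelope sketch of such an operator: rank catalog entries by embedding similarity to the concept. The embed() stub stands in for a real encoder; a production DBMS would precompute and index these vectors rather than embedding the catalog per query.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model (e.g., a sentence encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def tables_related_to(concept: str, catalog: dict[str, str], k: int = 3):
    """Rank tables by cosine similarity between the concept and each
    table's name plus column list."""
    q = embed(concept)
    scored = [(name, float(embed(f"{name}: {cols}") @ q))
              for name, cols in catalog.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

catalog = {"products": "sku, category, brand", "stores": "id, city, state"}
print(tables_related_to("electronics", catalog))
```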
Second, on the system-to-agent side, the database is now encouraged to become proactive, returning not just answers but feedback. These "sleeper agents" inside the DB can explain why a query returned empty results, suggest alternative tables, or give cost estimates so the agent can rethink a probe before execution.
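A proactive reply channel might carry something like the following alongside (or instead of) result rows. The fields are illustrative; the paper leaves the exact feedback schema open:

```python
from dataclasses import dataclass

@dataclass
class ProbeFeedback:
    """What a proactive reply channel might carry back with results."""
    empty_result_reason: str | None   # e.g., "city='Berkley' matched 0 rows; did you mean 'Berkeley'?"
    suggested_tables: list[str]       # semantically nearby alternatives
    estimated_cost_s: float           # predicted runtime if the probe is resubmitted as-is
```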
Processing and Optimizing Probes
Section 5 focuses on how to process probes at scale, and on what it means to optimize them. The key shift is that the database no longer aims for exact answers to each query. Instead, it seeks to satisfice: provide results that are good enough for the agent to decide its next step.
The paper presents this approximation in two forms. First, exploratory scaffolding: quick samples, coarse aggregates, and partial results that help the agent discover which tables, filters, and joins matter. Second, decision-making approximation: estimates with bounded error that may themselves be the final answer, because the human behind the agent often cares more about trends than exact counts.
Let's consider the task of finding reasons for why profits in coffee bean sales in Berkeley were low this year relative to last. A human analyst would cut to the chase: join sales with stores, compare 2024 vs. 2025, then check returns or closures. A schema-blind LLM agent would issue a flood of redundant queries, many of them dead ends. The proposed system splits the difference: it prunes irrelevant exploration, offers approximate aggregates up front (coffee sales down ~15%), and caches this in memory so later probes can build from it.
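The approximate-aggregate step is easy to picture in DuckDB, the backend from the first case study. Below is a toy version with synthetic data, using a Bernoulli sample and scaling the sum back up; the schema and numbers are invented for illustration.

```python
import duckdb

con = duckdb.connect()
# Hypothetical toy data standing in for the coffee-sales example.
con.sql("""
    CREATE TABLE sales AS
    SELECT 'Berkeley' AS city, 'coffee beans' AS category,
           2024 + (i % 2) AS year, random() * 100 AS profit
    FROM range(100000) t(i)
""")
# A 5% Bernoulli sample gives a fast, approximate year-over-year read;
# scaling the sampled sum by 1/0.05 estimates the true total.
print(con.sql("""
    SELECT year, SUM(profit) * 20 AS est_profit
    FROM sales USING SAMPLE 5 PERCENT (bernoulli)
    WHERE city = 'Berkeley' AND category = 'coffee beans'
    GROUP BY year
    ORDER BY year
"""))
```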
To achieve this, the probe optimizer adapts familiar techniques. Multi-query optimization collapses redundant subplans, approximate query processing provides fast sketches instead of full scans, and incremental evaluation streams partial results with early stopping when the trend is clear. The optimizer works both within a batch of probes (intra-probe) and across turns (inter-probe). It caches results and materializes common joins so that the agent's next attempts don't repeat the same work. The optimization goal is not minimizing per-query latency but minimizing the total interaction time between agent and database, a subtle but important shift.
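Incremental evaluation with early stopping is simple to sketch: stream partial aggregates and cut off once the running estimate stabilizes. The convergence rule below (successive estimates within 2%) is my stand-in for whatever test a real system would use.

```python
import random

def incremental_mean(stream, rel_tol=0.02, min_chunks=5):
    """Stream partial aggregates and stop early once the running
    estimate stabilizes: 'stop when the trend is clear'."""
    total, count, prev = 0.0, 0, None
    for i, chunk in enumerate(stream, 1):
        total += sum(chunk)
        count += len(chunk)
        est = total / count
        if prev is not None and i >= min_chunks and abs(est - prev) <= rel_tol * abs(prev):
            return est, count          # early stop: estimate has converged
        prev = est
    return total / count, count        # exhausted the stream

chunks = ([random.gauss(100, 15) for _ in range(1000)] for _ in range(100))
print(incremental_mean(chunks))
```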
Indexing, Storage, and Transactions
Section 6 addresses the lower layers of the stack: indexing, storage, and transactions. To deal with the dynamic, overlapping, and branch-heavy nature of agentic speculation, the paper proposes an agentic memory store for semantic grounding and a new transactional model for branched updates.
The agentic memory store is essentially a semantic cache. It stores results of prior probes, metadata, column encodings, and even embeddings to support similarity search. This way, when the agent inevitably repeats itself, the system can serve cached or related results. The open problem is staleness: if schemas or data evolve, cached grounding may mislead future probes. Traditional DBs handle this through strict invalidation (drop and recompute indexes, refresh materialized views). The paper hints that agentic memory may simply be good enough until corrected, a looser consistency model that may suit LLMs' temperament.
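That looser consistency model could be as simple as tagging cache entries with the schema version and serving stale hits flagged rather than dropped. A toy version, with all policy details invented:

```python
import time

class AgenticMemory:
    """Toy grounding cache with loose consistency: entries carry the
    schema version they were computed under, and stale hits are served
    with a flag instead of being invalidated eagerly. My guess at the
    paper's 'good enough until corrected' model."""

    def __init__(self, current_schema_version: int = 1):
        self.schema_version = current_schema_version
        self._entries = {}

    def put(self, key: str, value):
        self._entries[key] = (value, self.schema_version, time.time())

    def get(self, key: str):
        if key not in self._entries:
            return None, "miss"
        value, ver, _ = self._entries[key]
        if ver < self.schema_version:
            # A traditional DB would drop this entry; here we return it
            # flagged, letting the agent decide whether to re-probe.
            return value, "stale"
        return value, "fresh"
```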
For branched updates, the paper proposes "a new transactions framework that is centered on state sharing across probes, each of which may be independently attempting to complete a user-defined sequence of updates". The paper argues for multi-world isolation: each branch must be logically isolated, but may physically overlap to exploit shared state. Supporting thousands of concurrent speculative branches requires something beyond Postgres-style MVCC or Aurora's copy-on-write snapshots.
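The cheapest way to see why sharing matters: give every branch a private delta over a shared base snapshot, i.e., key-level copy-on-write. The sketch below is far simpler than what the paper calls for (no conflict detection, no durability), but it shows why thousands of branches need not cost thousands of database copies.

```python
class BranchStore:
    """Minimal multi-world sketch: branches share the base snapshot and
    keep private deltas, so each branch costs memory proportional to
    what it changed, not to the whole database."""

    def __init__(self, base: dict):
        self.base = base            # shared world state
        self.deltas = {}            # branch_id -> private overlay

    def fork(self, branch_id: str):
        self.deltas[branch_id] = {}

    def write(self, branch_id: str, key, value):
        self.deltas[branch_id][key] = value     # only the delta is private

    def read(self, branch_id: str, key):
        overlay = self.deltas[branch_id]
        return overlay.get(key, self.base.get(key))

    def commit(self, branch_id: str):
        self.base = {**self.base, **self.deltas.pop(branch_id)}

store = BranchStore({"stock": 10})
store.fork("what-if-1")
store.write("what-if-1", "stock", 3)
print(store.read("what-if-1", "stock"))   # 3 inside the branch
print(store.base["stock"])                # 10 in the shared world
```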
Discussion
The paper offers an ambitious rethinking of how databases should respond to the arrival of LLM agents. This naturally leaves several open questions for discussion.
In my view, the paper frames the problem asymmetrically: agents are messy, exploratory, redundant, so databases must bend to accommodate them. But is that the only path forward? Alternatively, agents could be fine-tuned to issue smarter probes that are more schema-aware, less redundant, more considerate of cost. A protocol of mutual compromise seems more sustainable than a one-sided redesign. Otherwise we risk ossifying the data systems around today's inefficient LLM habits.
Multi-client operation remains an open issue. The architecture is sketched as though one user's army of agents owns the system. Real deployments will have many clients, with different goals and different access rights, colliding on the same backend. What does agentic memory mean in this context? Similarly, how does load management work? How do we allocate resources fairly among tenants when each may field thousands of speculative queries per second? Traditional databases long ago developed notions of connection pooling, admission control, and multi-tenant isolation; agent-first systems will need new equivalents attuned to speculation.
Finally, there is the question of distribution. The architecture as presented looks like a single-node system: one interpreter, one optimizer, one agentic memory, one transaction manager. Yet the workloads described are precisely the heavy workloads that drove databases toward distributed execution. How should agentic memory be partitioned or replicated across nodes? How would speculative branching work here? How can bandwidth limits be managed when repeated scans, approximate sampling, and multi-query optimization saturate storage I/O? How can cross-shard communication be kept from overwhelming the system when speculative branches and rollbacks trigger network communication at scale?
Future Directions: A Neurosymbolic Angle
If we squint, there is a neurosymbolic flavor to this entire setup. LLM agents represent the neural side: fuzzy reasoning, associative/semantic search, and speculative exploration. Databases constitute the symbolic side with schemas, relational algebra, logical operators, and transactional semantics. The paper is then all about creating an interface where the neural can collaborate with the symbolic by combining the flexibility of learned models with the structure and rigor of symbolic systems.
Probes are already halfway to symbolic logic queries: part SQL fragments, part logical forms, and part neural briefs encoding intent and constraints. If databases learn to proactively steer agents with rules and constraints, and if agents learn to ask more structured probes, the result would look even more like a neurosymbolic reasoning system, where neural components generate hypotheses and symbolic databases test, prune, and ground them. If that happens, we can talk about building a new kind of reasoning stack where the two halves ground and reinforce each other.
MongoDB as an AI-First Platform
Document databases offer an interesting angle on the AI–database melding problem. Their schema flexibility and tolerance for semistructured data make them well-suited for the exploratory phase of agent workloads, when LLMs are still feeling out what fields and joins matter. The looseness of document stores may align naturally with the fuzziness of LLM probes, especially when embeddings are brought into play for semantic search.
MongoDB's acquisition of Voyage AI points at this convergence. With built-in embeddings and vector search, MongoDB aims to support probes that ask for semantically similar documents and provide approximate retrieval early in the exploration phase.
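With pymongo against an Atlas cluster, a semantic probe is already expressible today via the $vectorSearch aggregation stage. The URI, collection, index name, and query vector below are all placeholders:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")   # placeholder URI
products = client["shop"]["products"]       # hypothetical collection

query_vec = [0.0] * 1024   # placeholder; real vectors come from the embedding model (e.g., Voyage)

# $vectorSearch is Atlas's aggregation stage for approximate nearest-
# neighbor search over a pre-built vector index; "product_vec_idx" is
# an assumed index over the "embedding" field.
results = products.aggregate([
    {"$vectorSearch": {
        "index": "product_vec_idx",
        "path": "embedding",
        "queryVector": query_vec,
        "numCandidates": 200,
        "limit": 10,
    }},
    {"$project": {"name": 1, "category": 1, "_id": 0}},
])
```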
How the 2018 Berkeley AI-Systems Vision Fared
Back in 2018, the Berkeley systems group presented a broad vision of the systems challenges for AI. Continuing our tradition of checking in on influential Berkeley AI-systems papers, let's give a brief evaluation. Many of its predictions were directionally correct: specialized hardware, privacy and federated learning, and explainability. Others remain underdeveloped, like cloud–edge integration and continual learning in dynamic environments. What it clearly missed was the rise and dominance of LLMs as the interface to data and applications. As I said back then, plans are useless, but planning is indispensable.
Compared with that blue-sky agenda, this new Agent-First Data Systems paper is more technical, grounded, and focused. It does not try to map a decade of AI systems research, but rather focuses on a single pressing problem and proposes mechanisms to cope.