Rethinking the Cost of Distributed Caches for Datacenter Services
This paper (HotNets'25) re-teaches a familiar systems lesson: caching is not just about reducing latency, it is also about saving CPU! The paper makes this point concrete by focusing on the second-order effect that often dominates in practice: the monetary cost of computation. It shows that caching, even after accounting for the cost of the DRAM you use for caching, still yields 3-4x better cost efficiency thanks to the reduction in CPU usage. In today's cloud pricing model, that CPU cost dominates. DRAM is cheap. Well, was cheap... I guess the joke is on them now, since right after this paper got presented, DRAM prices jumped by 3-4x! Damn machine learning, ruining everything since 2018!
Anyways, let's conveniently ignore that point and get back to the paper. Ok, so caches do help, but when do they help the most? Many database-centric or storage-side cache designs miss this point. Even when data is cached at the storage/database layer, an application read still needs to travel there and pay for RPCs, query planning, serialization, and coordination checks.
The paper advocates moving caches as close to the application as possible to cut CPU costs. The key argument is that application-level linked caches deliver far better cost savings than storage-layer caches. By caching fully materialized application objects and bypassing the storage/database read path entirely, linked caches eliminate query amplification and coordination overhead. Across production workloads, this yields 3-4x better cost efficiency than storage-layer caching, easily offsetting the additional DRAM cost. Remote caches help, but still burn CPU on RPCs and serialization. Storage-layer caches save disk I/O but leave most of the query and coordination path intact, delivering the weakest cost savings. The results are consistent across different access skews and read intensities, reinforcing that cache placement dominates cache size.
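To make the CPU-versus-DRAM tradeoff concrete, here is a back-of-envelope cost model in Python. All the numbers (per-read CPU times, hit rate, vCPU and DRAM prices) are my illustrative assumptions, not the paper's measurements; the point is just to show why the CPU term dominates at cloud prices.

# Back-of-envelope cost of one read: CPU time plus amortized cache DRAM.
# All constants below are illustrative assumptions, not the paper's numbers.
CPU_PRICE  = 0.04 / 3600    # dollars per vCPU-second (rough on-demand pricing)
DRAM_PRICE = 0.005 / 3600   # dollars per GB-second of cache memory (amortized)

def cost_per_read(cpu_ms_hit, cpu_ms_miss, hit_rate, cache_gb, reads_per_sec):
    cpu_ms = hit_rate * cpu_ms_hit + (1 - hit_rate) * cpu_ms_miss
    cpu_cost  = (cpu_ms / 1000.0) * CPU_PRICE
    dram_cost = cache_gb * DRAM_PRICE / reads_per_sec  # spread DRAM over the read rate
    return cpu_cost + dram_cost

# An uncached read pays the full database path on every request; an
# application-level (linked) cache hit skips RPCs, planning, and serialization.
uncached = cost_per_read(cpu_ms_hit=2.0,  cpu_ms_miss=2.0, hit_rate=0.0, cache_gb=0,  reads_per_sec=20_000)
linked   = cost_per_read(cpu_ms_hit=0.05, cpu_ms_miss=2.0, hit_rate=0.9, cache_gb=64, reads_per_sec=20_000)
print(f"uncached: {uncached:.2e} $/read, linked cache: {linked:.2e} $/read, ratio: {uncached / linked:.1f}x")

With these made-up numbers the ratio comes out around 3x: the DRAM term is real but small next to the CPU the cache saves, which is the paper's point.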
So that is the gist of the paper. The paper makes two adjacent points. Special cases of this observation, if you will. And let's cover them for completeness.
The first point concerns rich-object workloads, which is where the most striking evaluation results come from. For services where a single logical read expands into many database queries (e.g., metadata services and control planes), caching fully materialized objects at the application level avoids query amplification entirely. This yields up to an order-of-magnitude cost reduction versus uncached reads and roughly a 2x improvement over caching denormalized key-value representations.
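Here is a toy sketch of that amplification (the service, the tables, and the db/cache objects are hypothetical stand-ins, not from the paper): one logical read fans out into several queries unless the fully materialized object is already cached at the application.

# Hypothetical metadata-service read; names and schema are made up for illustration.
def get_volume_metadata_uncached(db, volume_id):
    # One logical read fans out into four database queries, each paying for
    # an RPC, query planning, and serialization.
    vol    = db.query("SELECT * FROM volumes WHERE id = ?", volume_id)
    acl    = db.query("SELECT * FROM acls WHERE volume_id = ?", volume_id)
    shards = db.query("SELECT * FROM shards WHERE volume_id = ?", volume_id)
    quota  = db.query("SELECT * FROM quotas WHERE owner = ?", vol["owner"])
    return {"volume": vol, "acl": acl, "shards": shards, "quota": quota}

def get_volume_metadata_linked(cache, db, volume_id):
    # The application-level cache stores the fully materialized object, so a
    # hit answers the read with zero database queries and zero RPCs.
    obj = cache.get(("volume", volume_id))
    if obj is None:
        obj = get_volume_metadata_uncached(db, volume_id)
        cache.put(("volume", volume_id), obj)
    return obj

A denormalized key-value cache at the storage layer would still pay the RPC and serialization on every lookup; caching the assembled object in-process pays for none of that on a hit.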
The second result, a negative result, is also important. Adding even lightweight freshness or version checks largely erases these gains, because the check itself traverses most of the database stack. The experiments make clear that strong consistency remains fundamentally at odds with the cost benefits of application-level caching. The paper leaves this as an open challenge: we still lack a clean, low-cost way to combine strong consistency with the economic benefits of application-level caching. I think it is possible to employ leases to trade an increase in update latency for cost efficiency, and alleviate this problem. Or we could just say: cache coherence is hard, let's go shopping for CXL!
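To sketch what I mean by leases (my speculation, not the paper's design): each cached object carries a short lease during which reads skip the database entirely, and a writer waits out, or explicitly revokes, any outstanding lease before acknowledging the update. Below is a minimal single-node sketch in Python; a distributed version would need the writer (or the database) to track the leases granted to each application cache.

import time

LEASE_SECONDS = 2.0  # assumed lease length; tune for tolerable update latency

class LeasedCache:
    """Single-node sketch: reads are cheap while the lease is valid, and a
    write waits out the lease so later readers never see the old value."""
    def __init__(self, db):
        self.db = db
        self.entries = {}  # key -> (value, lease_expiry)

    def read(self, key):
        value, expiry = self.entries.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value                 # hit: no RPC, no query, no freshness check
        value = self.db.read(key)        # expired or missing: pay the full read path
        self.entries[key] = (value, time.monotonic() + LEASE_SECONDS)
        return value

    def write(self, key, value):
        self.db.write(key, value)
        # Trade update latency for read cost: block until the outstanding lease
        # expires, then drop the entry so the next read fetches the new value.
        _, expiry = self.entries.get(key, (None, 0.0))
        time.sleep(max(0.0, expiry - time.monotonic()))
        self.entries.pop(key, None)

The write-side wait is the price you pay, but with short leases it is bounded and predictable, which seems like a reasonable trade against burning CPU on per-read version checks.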
Discussion
Overall, the paper quantifies something many practitioners intuit but rarely measure. If you care about cost (including the monetary cost), move caching up the stack, cache rich objects, and trade memory against CPU burn.
As usual, Aleksey and I did a live reading of the paper. And as usual, we had a lot to argue and gripe about. Above is a recording of our discussion, and this links to my annotated paper.
Of course, Aleksey zeroed in on the metastability implications right from the abstract. And yes, those implications remain unaddressed in the paper. If you cut costs and operate at lower CPU provisioning (thanks to the cache assist), you make yourself prone to failure by running at maximum utilization, without any slack. The moment the cache fails or becomes inaccessible, your application gets hit with 2-3x more traffic than it can handle and suffers unavailability or metastability.
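The back-of-envelope arithmetic, with my illustrative hit rates:

# If the backend is provisioned only for the miss traffic, losing the cache
# multiplies backend load by 1 / (1 - hit_rate). Hit rates below are illustrative.
for hit_rate in (0.5, 0.67, 0.75):
    print(f"hit rate {hit_rate:.0%} -> {1 / (1 - hit_rate):.1f}x load on cache failure")

So the 2-3x figure corresponds to a 50-67% hit rate; at higher hit rates the jump is worse, which is exactly when the cost savings are biggest.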
I had some reservations about application-level caches. They are undeniably effective, but they lack the reusability and black-box nature of storage-layer caching. Storage-side caching is largely free, transparent, and naturally shared across nodes and applications. Application-level caching, by contrast, requires careful design and nontrivial development effort. It also sacrifices reuse and sharing, since each application must manage its own cache semantics and lifecycle. I wish the paper had discussed these costs and tradeoffs.
The writing after the introduction was repetitive and subpar. Sections 2 and 3 largely repeated the Introduction and wasted space. Then we got only two paragraphs of the Theoretical Analysis section, which we had actually looked forward to reading. That section is effectively cropped out of the paper, even though it should carry the core of the paper's argument.
The paper's subtitle (see the running headers on pages 3, 5, and 7) is a copy-paste error from the authors' HotNets 2024 paper. There did not seem to be any camera-ready checks on the paper. To motivate strong consistency, the paper drops several citations in Section 2.3, calling them recent work; only 2 of the 6 are from after 2014. The figures were sloppy as well. Did you notice Figure 6 above? The y-axes do not cover the same ranges, which makes it very hard to compare the subfigures. The y-axis in Figure 5 uses relative costs, which is also not of much use. It may be that in 2025 most people use LLMs to read papers, but one should still write papers as if humans will read them, past the introduction, line by line, to understand and check the work.
Finally, here is an interesting question to ponder: does this paper conflict with the storage disaggregation trend?
At first glance, the paper appears to push against the storage disaggregation trend by arguing for tighter coupling between computation and cached data to meet real-time freshness constraints. In reality, it does not reject disaggregation but warns that disaggregated designs require additional caching above the storage layer. Storage-side caching alone would not suffice, from either a latency or a cost perspective! The paper also points to a hidden cost: freshness guarantees degrade when cache coherence is treated as a best-effort side effect of an eventually consistent pipeline. The paper's message is that disaggregation needs explicit freshness semantics and coordination mechanisms. So maybe a corollary here is that we should expect disaggregated systems to inevitably grow "stateful edges" over time in order to recover performance and control.

