OSDI 2022: nature is healing

This week I attended the USENIX Symposium on Operating Systems Design and Implementation (OSDI), which was held in San Diego (well, in Carlsbad, CA, to be more accurate). A big appeal of OSDI this year was that it was colocated with USENIX ATC. This allowed participants to attend two great conferences for the price of one registration and one trip. Despite this allure, OSDI22 was still far from its pre-Covid glory and buzz. 900 people attended OSDI20, whereas OSDI22 drew 300 people in person and another 250 remotely.

By a more subjective measure, I found the energy of the conference lacking compared to previous years. There were no lively discussions in the sessions or in the hallways. (Aleksey and I had a very lively discussion about Pat's recent paper, which drew concerned looks from nearby tables.)

There weren't any specific buzzwords ringing through the conference. But if I had to pick one, it would be disaggregated systems and far-memory systems. As a hardware development in this regard, CXL is new and exciting, and there was a paper on it at ATC. The first session of OSDI was on distributed storage and far memory and included interesting papers. The "Metastable Failures in the Wild" paper sparked interesting discussions among participants.

There were three keynotes, one for each day of the conference. David Tennenhouse's keynote was general, painting a broad vision and discussing some trends in edge computing. My understanding is that the term edge computing is now being extended to also cover cloud computing. I don't like vaguely defined terms which expand to become even more meaningless over time, but that is one man's opinion. David's talk advocated taking more risks and doing more foundational work in systems. The content-based routing and "A Database-oriented Operating System (DBOS)" projects were cited as promising examples.

Eric Brewer's keynote was about the security of open-source projects. His argument was that, rather than a technical solution, a multi-pronged societal and policy-based solution is required to solve this critical problem. No CAP-like grand conjecture from Eric this year. :-)

It was good to have an in-person conference after 2+ years. I immensely enjoyed catching up with old friends and meeting new people. On the final day, I connected with Jon Howell and learned about the Verus project, which is developing a verification framework for Rust-like code for real systems.
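As a taste of what Verus is about, here is a hedged sketch based on the project's public examples (my own illustration, and the exact syntax may differ or evolve): you annotate ordinary Rust functions with ensures-style specifications, and the verifier proves them at compile time.

```rust
use vstd::prelude::*;

verus! {

// The postconditions below are proven statically by the verifier;
// nothing is checked at runtime.
fn max(a: u64, b: u64) -> (r: u64)
    ensures
        r >= a,
        r >= b,
        r == a || r == b,
{
    if a >= b { a } else { b }
}

} // verus!
```

The appeal is that the proof obligations live right next to executable Rust code, rather than in a separate model of the system.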


Below are some unedited notes from Tennenhouse's keynote. Tomorrow I will try to add notes about some of the sessions and Brewer's keynote. And I will do a deep dive into a couple of interesting papers in the coming days. The papers from OSDI and ATC are already freely available. Presentations and videos will be made available in a couple of months.

Keynote from David Tennenhouse: Surprise-inspired networking

David started with broad comments about security, machine learning, and blockchains. Rather than the incremental attack-resolution model prevalent in the security field, David called out the need to get people to think systematically about security. He mentioned that the success of deep learning shouldn't lead us to lose curiosity about how the results work and about potential alternative solutions. Finally, he mentioned that in the blockchain domain several unrelated things got coupled to each other unnecessarily, and there is a need to refactor blockchains to decouple the subsystems so each can be improved independently. (This supports my prediction about the future of blockchains.)

After these general comments, the rest of the talk focused on the networking field. David compared circuit-switched networking, which began in 1890, with packet switching and the internet, which began in 1960. In contrast to circuit-switched networks, packet-switched networking is heavy on adding computation into the network. Packet-switched networking also added a huge amount of in-network storage to facilitate rate adaptation between communication endpoints.

Another core concept in packet-switched networks has been information theory. The concept of surprise underlies information theory, as information content is measured by novelty: the less likely a message, the more information it carries. So does this mean delivery of old content has no value? David mentioned the need to move beyond using information theory at the link layer and to adopt it for studying semantic surprise, evaluating surprisal through the lens of applications, not just channels. High value is created by real-time fusion of new content with old content in storage. We can be aggressive in using the surplus capacity at the edge. This is what hyperscalers, like AWS, have been doing in some sense.
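To make the notion of surprise concrete, here is a minimal sketch (my own illustration, not from the talk) of Shannon surprisal, the number of bits of information that information theory assigns to an event of probability p:

```rust
// Shannon surprisal: I(x) = log2(1 / p(x)) bits.
// Unlikely events carry many bits; a fully expected event carries zero.
fn surprisal_bits(p: f64) -> f64 {
    assert!(p > 0.0 && p <= 1.0, "p must be a valid nonzero probability");
    (1.0 / p).log2()
}

fn main() {
    println!("{:.1} bits", surprisal_bits(1.0 / 1024.0)); // rare event: 10.0 bits
    println!("{:.1} bits", surprisal_bits(1.0)); // already-known event: 0.0 bits
}
```

By this channel-level measure, delivering already-known content is worth zero bits, which is exactly the framing David pushed back on: at the application layer, value comes from fusing the few new bits with the old content already sitting in storage.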

To probe future trends, the talk asked an interesting question: what will be in surplus? David identified two things that will be in surplus in the coming years: edge capacity and high-resolution sensors. This surplus makes it possible to power applications via cost-effective, high-resolution instrumentation of the real world, harkening back to the ubiquitous computing vision of Mark Weiser.

David concluded by posing an open-ended question about the plumbing needed for edge computing. To get really good at hosting application functionality near the edge, he said, there is a need to refactor networking. A good model could be to treat the communication networks connecting hyperscalers to each other separately from the networks inside of hyperscalers, and to use different solutions and technologies for each.

 

