Jon Howell AMA at SOSP21 (Day 2)

There were several AMA (ask-me-anything) sessions at SOSP21, where attendees could ask questions. I really liked Jon Howell's AMA session. I asked two questions I was interested in learning about.


Why do you think verification is at a breakaway point now? What are the factors that make this the go-big moment?

We are now figuring out how to make common problems cheap. Historically, Coq proving was about being clever: doing something not done before. Now we are focusing on how to make the common stuff cheap.

The Ironclad project began when Rustan Leino came to our team and said: I have a tool, let's use this tool. The Dafny demos were interesting to me. A red squiggly line showed a bug. When you gave a hint, the bug went away. It was a lightbulb moment. It took an extra line of code/explanation for me to get it; the verifier had already gotten it.

We had verification runouts in Silicon Valley, with TLC, TLAPS, and others. The verifier sends back a question mark, and we try to decode it. What we realized was that almost all of what we did was tedious stuff. The magic of Dafny is that it makes the tedious stuff go away. If I spend 30 minutes arguing with Dafny to understand something, rather than half a day with testing, that is a cheaper way to learn what the structure of my program should be.
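To make the red-squiggly experience concrete, here is a minimal Dafny sketch of my own (not from the talk). Without the loop invariants, Dafny cannot prove the postconditions and flags them; adding the invariants, the one-line hints, makes the error go away.

```dafny
// Without the invariants below, Dafny cannot verify the ensures
// clauses and marks them with a red squiggly. The invariants are
// the "hint": a few extra lines of explanation, and the bug report
// disappears.
method Max(a: array<int>) returns (m: int)
  requires a.Length > 0
  ensures forall k :: 0 <= k < a.Length ==> a[k] <= m
  ensures exists k :: 0 <= k < a.Length && a[k] == m
{
  m := a[0];
  var i := 1;
  while i < a.Length
    invariant 1 <= i <= a.Length
    invariant forall k :: 0 <= k < i ==> a[k] <= m
    invariant exists k :: 0 <= k < i && a[k] == m
  {
    if a[i] > m { m := a[i]; }
    i := i + 1;
  }
}
```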


Specification is very useful, but hard to sell. Verification is exciting to me because it has inherent value. When I am trying to sell this, I don't tell people that it will help them prove their code correct. I tell them that it is becoming a viable alternative to testing, and that it can help you find things you don't find with testing.
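As a concrete (and again my own, hypothetical) illustration of catching what testing misses: Dafny rejects the following method at verification time unless the precondition is present, flagging the empty-sequence edge case that a test suite could easily overlook.

```dafny
// Dafny refuses to verify s[0] unless it can prove the index is in
// bounds. The requires clause documents and enforces the non-empty
// assumption statically; tests would have to get lucky to hit it.
method First(s: seq<int>) returns (x: int)
  requires |s| > 0
  ensures x == s[0]
{
  x := s[0];
}
```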


Verification will not sweep the industry. Someone writing a PHP frontend app that doesn't require verification has no need to bother. But in many projects, latent bugs hurt us a lot, and some bugs are very painful and expensive.



How do you make verification more maintainable as the code changes?

Having a specification & verification language that's seamlessly integrated with the implementation language is a prerequisite. Verification can't be something that comes along later and points at the code. It's really the other way around: the first thing you read is the overall correctness argument; then you work your way down into the details and see how the implementation conforms to that argument.
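Here is a small Dafny sketch of my own (not Jon's code) of what that integration looks like: the specification and the implementation live in the same file and the same language, and the ensures clause binds the executable code to the high-level correctness argument.

```dafny
// The recursive function is the readable, obviously-correct spec;
// the iterative method is the efficient implementation. Dafny checks
// that the two agree, so the spec is the first thing you read and
// the code conforms to it.
function Fib(n: nat): nat {
  if n < 2 then n else Fib(n - 1) + Fib(n - 2)
}

method ComputeFib(n: nat) returns (r: nat)
  ensures r == Fib(n)
{
  if n == 0 { return 0; }
  var prev, cur := 0, 1;
  var i := 1;
  while i < n
    invariant 1 <= i <= n
    invariant prev == Fib(i - 1) && cur == Fib(i)
  {
    prev, cur := cur, prev + cur;
    i := i + 1;
  }
  return cur;
}
```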


That does imply that I think writing verified code will, in general, require top-down thinking. That is, if we don't yet know what shape the system is, we might as well hack together something in Python. But when you're ready to make it robust, you'll find it's easier to refactor the code (top down, high-level arguments first) than it is to try to wrap a proof around disorganized code.


It's a reasonable complaint! I think the answer is that today's tools can make it hard to write maintainable verified code -- that is, a maintainable proof artifact. An explicit goal I have for our current project -- a verified high-performance storage system -- is to produce a maintainable artifact. My hope is that people will try the thing because it's useful, and then pop open the hood to figure out how it works, and say "wow that's interesting." Accomplishing that means more time spent on robust engineering and more time spent improving tools to enable a prettier artifact.


Misc


Why did you pick distributed systems as your topic?

I don't know... taste.


Read lots of papers. It is important to identify cracks you can work on. Help somebody else build something, and you will find a problem. 
