The demise of coding is greatly exaggerated

NVIDIA CEO Jensen Huang recently made some very controversial remarks:

"Over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you that it is vital that your children learn computer science, and everybody should learn how to program. And in fact, it’s almost exactly the opposite.

It is our job to create computing technology such that nobody has to program and that the programming language is human. Everybody in the world is now a programmer. This is the miracle of artificial intelligence."

I am not going to wisecrack and say that this is power poisoning, and that this is what happens when your company's valuation more than triples in a year and surpasses Amazon and Google. (Although I don't discount this effect completely.)

Jensen is very smart and also has some great wisdom, so I think we should give this the benefit of the doubt and try to respond in a thoughtful manner.

A response is warranted because this statement got a lot of publicity and, since it comes with some authority behind it, created confusion for a wide range of people. My brother asked me about it, presumably because he wanted to see how he might direct the education of his children.

My response is not motivated by turf-defending or by job security concerns about the rise of AI. I am a researcher, and my day-to-day job is not coding/programming. I don't feel threatened one bit by the proliferation of AI tools.


Coding is dead, long live coding

With every new advancement in programming languages and technology, this concern has come up anew. Some people declared coding dead, and some people freaked out. What ended up happening, over and over, is that we got a higher-level specification/abstraction, and the demand for coding/programming went up thanks to those new developments. Moreover, old programming languages and their niches stayed mostly undisturbed. After more than six decades, COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction-processing jobs.

This comic strip, from CommitStrip (2016), sums it up well. There will always be coding. The abstraction level may go up, and we may start using domain-specific languages (DSLs), but we will still need to be precise and comprehensive in our specifications to solve real-world problems. The world is very complicated; there are corner cases everywhere.

Natural language is ambiguous and not suitable for programming. LLMs still need to generate code to get things done. If that code is not inspected carefully, it incurs tech debt at the monumental speed of computers. Natural language prompts are not repeatable/deterministic; they are subject to breaking at any time. This makes "natural language programming" unsuitable for even small projects, let alone medium to large ones.
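
To make this concrete, here is a minimal sketch (my own hypothetical example, not from any real codebase): a one-line English spec like "sort the users by name" silently leaves several decisions open, and every reading of it is a different program.

```python
# Hypothetical illustration: one vague English spec, several different valid programs.
# "Sort the users by name" does not say what to do about case, accents, or missing names.

users = [
    {"name": "alice"},
    {"name": "Bob"},
    {"name": None},       # missing name: error out? sort first? sort last?
    {"name": "Álvaro"},   # accented name: locale-aware order or raw code-point order?
]

# Reading 1: naive case-sensitive sort. Crashes on the missing name.
# sorted(users, key=lambda u: u["name"])   # TypeError: None is not orderable

# Reading 2: case-insensitive sort, missing names pushed to the end.
by_name = sorted(users, key=lambda u: (u["name"] is None, (u["name"] or "").casefold()))

for u in by_name:
    print(u["name"])
```

Code forces each of these decisions to be made explicitly and repeatably; a natural language prompt leaves them to whatever the model happens to pick that day.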

Moreover, some things are inherently very hard; they are AI-complete (to adapt the term NP-complete to the occasion: the hardest problems, whose solutions can be verified quickly but not necessarily found in any reasonable time). I use TLA+ for modeling and designing distributed systems and algorithms, and I don't see AI replacing that anytime soon. There is simply too much subtlety, and a great deal of intelligence and expertise is required to work on the design of distributed systems and algorithms.
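
For a small taste of why, here is a minimal sketch (a toy Python illustration of what TLA+'s model checker TLC does far more thoroughly; the counter example is my own, not from the post): exhaustively exploring the interleavings of two unsynchronized read-modify-write increments exposes the classic lost-update bug that a few lucky test runs can easily miss.

```python
from itertools import permutations

# Two processes each do: read the shared counter, then write read_value + 1.
# Each atomic action is a (step_name, process_id) pair.
STEPS = [("read", 0), ("write", 0), ("read", 1), ("write", 1)]

def run(schedule):
    """Execute one interleaving and return the final counter value."""
    counter = 0
    local = {0: None, 1: None}
    for step, pid in schedule:
        if step == "read":
            local[pid] = counter
        else:  # write
            counter = local[pid] + 1
    return counter

def interleavings():
    """All schedules that preserve each process's own read-before-write order."""
    return {
        perm for perm in permutations(STEPS)
        if all(perm.index(("read", p)) < perm.index(("write", p)) for p in (0, 1))
    }

valid = interleavings()
violations = [s for s in valid if run(s) != 2]
print(f"{len(violations)} of {len(valid)} interleavings lose an update")
# Prints: 4 of 6 interleavings lose an update
```

Real protocols have state spaces that explode far beyond what hand-rolled enumeration like this can handle, which is exactly where model checking tools and design expertise come in.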

As my final argument (borrowing from one of my previous posts), I'd like to mention that a career in computer science and software technology (practicing coding) gives you vital and generally applicable skills: hacking, debugging, abstract thinking, quick learning/adaptation, and organizational skills.

Being supported by AI tools is not a substitute for mastering these skills. You cannot borrow skills/wisdom; you need to earn and own them. As the Turkish proverb says: "You cannot drive a water mill with hand-carried buckets of water". Or as the Amazonian proverb says: "There is no compression algorithm for (hands-on) experience".


What next?

Innovation begets innovation. The emergence of new problems and domains is a great equalizer. As we discover things, new terrains open up, and a new terrain is a good opportunity to make an impact without needing immense resources. AI is taking off (with a long, arduous journey ahead), so this is a great time to take up computer science and coding. AI is software, and one day it will start producing software, so this only means it is a ripe opportunity to learn and work on software.

For the future, Jensen Huang suggested that "students should focus more on fields like biology, teaching, industry, or farming." This is bad advice again. Let people pursue their passion. (Unlike Calvin Newport, I am firmly in the passion camp.) If biology, teaching, industry, or farming is your passion (you will know if it is; it won't be ambiguous), pursue it. But it is very misguided to direct people away from computer science and software technology by saying AI will take care of that and make it obsolete.

I think it is time to double down on computer science and software technology. I think we will see computer science and software technology go further into the K-12 school curriculum. We will start to see more Pi-shaped people, who have depth in two areas and pursue generalist applications. After building some depth, being a generalist is a good strategy.

Finally, let me air a grievance about a pet peeve of mine. Imagine the breakthroughs we could achieve if only we could channel 1% of the resources/effort/interest directed at researching/developing machine learning into researching/developing human learning.

Comments

Anonymous said…
The way to "talk" with a computer seems to have always been a matter of controversy.
I think Dijkstra's argument stands the test of time: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD06xx/EWD667.html
Anonymous said…
AI can outperform humans in certain tasks. For example, AI is superhuman at chess. Can AI master the task of coding, and what does that mean for the rest of us?

At the moment, code generation is somewhat limited. Devin, the SOTA in AI software engineering, is capable of relatively simple tasks and makes glaringly obvious mistakes. Human supervision is still required.

The much more plausible reality is that senior+ engineers using AI will see a >=2x productivity increase, removing the previous need for junior engineers. AI models won't replace juniors, but seniors with AI will.

AI's principal restraint is the bedrock of statistical inference: our models for causation are primitive. Causal reasoning cannot emerge from statistical models generated through observations. A model for causal reasoning would require a breakthrough in statistics or cognitive models. Until then, models are weak at reasoning, and thus at coding.

I am interested in how, seemingly, new software products aren't booming around the corner. GPT & co. undoubtedly make a large chunk of coding tasks easier, so you would expect an inspired rise in product entrepreneurship.
