Rethinking Computer Literacy in the Age of AI
As model capabilities skyrocket, we need to rethink what students need to know
Over the last few weeks I’ve been writing about the recent jump in capabilities of AI coding models, and why everyone needs to care. The latest piece of evidence comes from Spotify co-president and CTO Gustav Söderström, who recently said in an earnings call, “When I speak to my most senior engineers — the best developers we have — they actually say that they haven't written a single line of code since December.” If that’s true at a company like Spotify, then what do students who will be entering the workforce in a few years need to learn right now? Education is always a moving target. You don’t aim at where the world is, you aim at where it will be.
For the last seven years I’ve been regularly teaching an Introductory Computer Science course, focused mostly on developing coding proficiency. Until very recently, I firmly believed that this was an important skill for a wide range of students, not just CS majors. Why? One reason is that coding is the only fallback we have when we run up against the limits of a piece of software … and everyone uses software. Most consumer software allows you to enhance functionality by writing little helper programs called scripts. Software packages from Mathematica to Minecraft offer some kind of scripting interface. Excel isn’t doing the data manipulation you need? Write a script for it. Photoshop isn’t giving you the visual effects you want? Write a script for it. If you know how to write a script, it becomes your superpower.
I no longer believe coding is an important skill for non-computer scientists. As software packages incorporate LLM interfaces, instead of writing scripts you can now simply ask for what you want in plain English. And if there is no software package out there that does what you want? You can make your own from scratch, again in plain English.
The need for widespread coding literacy is dead.
So what do most students need to know now? English. But not just plain, conversational English. The English that people need to know is much more precise, like writing out mathematics. In order for AI agents to take your words and effectively turn them into code, you have to be more careful with what you say than most people are used to being. This is the skill of the future. This is what we need to be teaching now.
Each week I’ve been sharing an app I recently made with AI assistance. In my next post I’ll go into detail about the process I’ve been using to make these apps, and some tips for success that I’ve picked up along the way. The most important lesson is to be precise with your language. Here’s an example.
Last week I wrote an app for people to experiment with geometric transformations (don’t worry if you don’t know what those are). It’s quite sophisticated, but you don’t need to know much to start tinkering with it. You can play with it by clicking the image below. Full instructions for use and code are here.
Before asking an LLM to write a single line of code, I spent about two hours carefully writing down a precise vision for the app I wanted. The result was a two-page specification document that I fed to an LLM coding agent. Here’s an excerpt from that document:
… Each time a user picks a transformation, it appears at the bottom of a list in the right sidebar, and is composed with all transformations above it. By default, at all times the original image will also be present, but it will have some translucency to it. Each new transformation makes all previous versions of the image more translucent, with the oldest versions the most translucent. The geometry that defines the transformation (like the mirror line from a mirror tool) is only visible for the highlighted element of the list in the right sidebar. By default this highlighted element is the most recent tool chosen by the user, but the user can click on other elements in the list to edit earlier transformations. …
A few things to note:
It is written in English, not code.
The writing is not the way people (including me) generally talk. It’s much more precise.
Every detail, from functionality to aesthetic design choices, needs to be thought through.
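To see how this kind of precision pays off, consider just one sentence of the spec: “Each new transformation makes all previous versions of the image more translucent, with the oldest versions the most translucent.” That sentence is unambiguous enough that a coding agent can turn it directly into a rule. Here is a minimal sketch of what that rule might look like; the function name, fade rate, and minimum opacity are my own illustrative choices, not values from the actual app:

```python
def layer_opacities(n_layers, step=0.25, floor=0.05):
    """Opacity for each image layer, oldest first, newest last.

    The newest layer is fully opaque; each older generation fades
    by `step`, but never drops below `floor` so the original image
    is always at least faintly visible (as the spec requires).
    """
    return [max(1.0 - step * (n_layers - 1 - i), floor)
            for i in range(n_layers)]

# Four stacked layers: oldest is most translucent, newest fully opaque.
print(layer_opacities(4))  # [0.25, 0.5, 0.75, 1.0]
```

Notice that the code is only this short because the English sentence already settled the design questions: which layer is most translucent, and that the original never disappears entirely.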
In addition to a complete specification document, it is common to write out tests the desired software has to pass. This allows modern coding agents to autonomously enter a prototyping loop: create software, run tests, find bugs, revise the software, repeat. Again, these tests don’t have to be written in code. They can be written in English as well, but language precision is just as important (if not more so) when writing tests.
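For the transformation app, one such English test might read: “Applying the same mirror transformation twice must return the original image.” That single precise sentence is enough for an agent to generate a runnable check. A hypothetical sketch (this is not the app’s actual test suite, and `mirror` here is a stand-in reflecting a point across a vertical line):

```python
def mirror(point, axis_x):
    """Reflect a 2D point across the vertical line x = axis_x."""
    x, y = point
    return (2 * axis_x - x, y)

# English test: "Applying the same mirror twice must return the original."
p = (3.0, 4.0)
assert mirror(mirror(p, axis_x=1.0), axis_x=1.0) == p
```

The vaguer the English sentence, the more room the agent has to generate a test that passes without verifying what you actually meant.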
So does anyone need to learn how to code? Yes, but that knowledge has effectively been pushed down the “stack” of things people need to know to understand computation. Initially that stack was just bits (0’s and 1’s), logic gates, basic arithmetic circuits (e.g. addition, multiplication), and machine code (very basic programs written out with sequences of numbers). Then low-level code instructions called “assembly language” that looked more like words (e.g. “sub”, “mul”) than numbers were added. Then higher level languages such as C were built on top of assembly language to abstract away the most routine computations. Eventually, even higher level languages that represented more abstractions, such as Python, were written on top of these low level languages. Now modern AI tools have added yet another level of abstraction, allowing us to effectively do computation with plain English.
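You can peek down this stack from inside Python itself. The standard library’s `dis` module shows the bytecode instructions the interpreter actually executes for a one-line function — a small illustration of how much machinery a single high-level line abstracts away:

```python
import dis

def total(prices):
    # One readable high-level line.
    return sum(prices)

# One level down the stack: the interpreter's bytecode for that line.
dis.dis(total)

print(total([1, 2, 3]))  # 6
```

Plain-English prompting to an AI agent is simply one more layer on top: the English gets turned into Python (or another language), which gets turned into bytecode or machine code, and so on down to bits.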
To get a complete picture of how computation works, you need to understand every level of this computational stack. However, every time this stack has been added to, fewer and fewer people need to understand the lower levels. By the time Python was introduced, very few people needed to know how to write programs in assembly language, and almost no one needed to know how to write out raw numerical machine code. Now that we have AI coding tools, fewer and fewer people will need to know assembly, C, or Python (or any other coding language, for that matter).
So what should an Introductory Computer Science course become? A course in writing formal language, where specifications and tests are written in unambiguous terms. In other words, a course in which students have to be precise about how their applications should work, as well as how they’ll look and feel for maximum usability. This course should also focus much more on the creative aspects of software design, since students need to worry less about technical implementation details. Logical thinking will still be a key element, which addresses another common argument for teaching everyone to code.
Such a course will require completely rethinking activities, assessments, and the way class time is spent. That’s the cost universities are going to have to pay to stay relevant. But this shift is not a loss. It’s an opportunity to finally teach what has always mattered most: clear thinking, precise expression, and creative design. Universities that recognize this will be the ones that prepare their students for the future.
David Bachman is a professor of Mathematics, Data Science, and Computer Science. To learn more about David’s work, visit his AI speaking and consulting site, his faculty page, or explore his mathematical art portfolio.