When everything is AI, nothing is.
Everyone is using the term, but what does "AI" actually mean?
Free audio version available here.
The term “AI” is now inescapable. Nearly every new product claims to be “powered by AI.” But in a world where the term is used so loosely, it’s lost much of its meaning. In this post, I’ll unpack what “AI” actually refers to—and how that meaning gets stretched by advertisers, AI enthusiasts, and AI critics. Understanding these distinctions can be crucial to any serious conversation about AI’s risks and benefits.
The idea of Artificial Intelligence dates back to the early days of computing in the 1940s and 1950s. The first proposed test for whether we’ve achieved true Artificial Intelligence is due to Alan Turing. Turing is now best known for his work cracking the German “Enigma” code in WWII, as portrayed by Benedict Cumberbatch in the movie The Imitation Game. But Turing is also regarded as the father of the modern field of computer science and one of the key inventors of the modern computer. An AI passes the “Turing Test” when a human can’t distinguish between talking to it and talking to another human. We now have AI systems that pass the Turing Test.
Through the second half of the 20th century, two distinct approaches to AI emerged. The first was a rule-based approach, in which a computer uses a set of well-defined heuristics to respond to a user. For example, a rule-based chess program would literally be a set of instructions to the computer of the form “If your opponent does this and the board looks like that, then do XXX.” Rule-based AI systems had a lot of success, perhaps most famously culminating in Deep Blue, the IBM system that beat world chess champion Garry Kasparov in 1997.
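To make the rule-based approach concrete, here is a minimal toy sketch in Python. The rules, inputs, and move descriptions are invented for illustration and bear no resemblance to a real chess engine; the point is only that every behavior is a hand-written rule.

```python
# A toy rule-based player in the spirit of early chess programs:
# behavior is nothing but hand-written "if the position looks like X, do Y"
# rules. The inputs and move names are invented for illustration.

def choose_move(in_check: bool, can_take_queen: bool, move_number: int) -> str:
    if in_check:                 # Rule 1: safety first
        return "move king out of check"
    if can_take_queen:           # Rule 2: grab the biggest piece available
        return "capture the queen"
    if move_number < 10:         # Rule 3: develop pieces early
        return "develop a knight toward the center"
    return "advance a pawn"      # Fallback rule

print(choose_move(in_check=False, can_take_queen=True, move_number=12))
# -> "capture the queen"
```

A real rule-based system like Deep Blue had vastly more rules plus deep search, but the basic character is the same: the intelligence is spelled out explicitly by human programmers.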
The second approach to AI was to simulate the human brain. This involves defining small computational units that model a single neuron, and then connecting those artificial neurons into complicated webs that we call Neural Networks. At first, neural networks weren’t of much use, until various conceptual and practical advances were made. Most significantly, real applications had to wait for special computer chips, known as graphics processing units (GPUs), that could perform many calculations at the same time rather than sequentially. Nvidia pioneered these chips, initially creating them in 1999 for rendering video game graphics. That’s why Nvidia is now one of the most valuable companies in the world—and at the center of today’s tech and geopolitical tensions.
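As a rough sketch of this second approach, the snippet below defines a single artificial neuron and wires three of them into a tiny network. The weights here are made up for illustration; a real network learns its weights from data rather than having them written by hand.

```python
import math

# One artificial neuron: weight its inputs, sum them, and pass the result
# through a nonlinear "activation" function (here, the classic sigmoid).
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # squashes the output into (0, 1)

# A tiny "network": two hidden neurons feeding one output neuron.
# The weights are arbitrary placeholders; training would adjust them.
def tiny_network(x1, x2):
    h1 = neuron([x1, x2], [0.5, -1.2], bias=0.1)
    h2 = neuron([x1, x2], [1.3, 0.7], bias=-0.3)
    return neuron([h1, h2], [2.0, -1.5], bias=0.05)

print(tiny_network(0.8, 0.2))
```

Modern systems stack millions or billions of these units, which is why parallel chips like GPUs made such a difference: all those weighted sums can be computed at the same time.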
“AI”, as a marketing term, now generally refers to any neural network, from simple phone apps and washing machines all the way to ChatGPT. They’re all being sold as AI, and the only thing they generally have in common is that they’re neural networks. That makes real debate over the pros and cons of AI very difficult: these systems have vastly different energy demands, privacy concerns, regulatory challenges, and so on.
Both proponents and opponents of AI overuse the term. Proponents will point to amazing advancements precipitated by AI in everything from logistics streamlining to DNA sequencing. These are all neural networks, but they may not have much in common besides that. Similarly, opponents will point to energy demands, privacy concerns, and existential risk. While these are real issues, they don’t apply to all AI systems equally, and yet many opponents lump them together and suggest banning (or at least boycotting) all AI on the same grounds.
In November of 2022, OpenAI released ChatGPT, and overnight AI entered the public consciousness. ChatGPT is a neural network specifically designed to process language and respond with language. It’s one of several massive neural networks, now referred to as Large Language Models (LLMs). Shortly before that, several competing systems were released that process language and respond with images. Together, LLMs and image-generating neural networks (and more recently, audio and video generating networks) are commonly referred to as generative AI systems.
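To give a sense of the basic “language in, language out” loop, here is a small sketch using the open-source Hugging Face transformers library with GPT-2, a small, older language model. This assumes the library and model weights are installed locally, and the output quality is far below today’s LLMs; it only illustrates the shape of the interaction.

```python
# Minimal sketch of an LLM's basic loop: take text in, produce text out.
# Uses the open-source Hugging Face `transformers` library with GPT-2
# (assumes `pip install transformers torch`; weights download on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("The term 'AI' is now used to describe", max_new_tokens=20)
print(result[0]["generated_text"])
```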
Most of the issues people have with AI are really issues with generative AI. However, even that distinction can get murky. Some non-generative applications are built on top of generative neural networks through a process called “fine-tuning.” Examples include models for fraud detection, medical diagnostics, and weather prediction. Some applications, such as object removal in photographs, are considered generative, but clearly not in the same way as creating an entire image from a prompt.
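As a hedged illustration of fine-tuning, the sketch below loads a pretrained generative model (GPT-2, via the Hugging Face transformers library) and repurposes it as a two-class classifier. The fraud-detection framing is purely illustrative, not a real system.

```python
# Sketch of "fine-tuning": start from a pretrained generative model (GPT-2)
# and repurpose it for a non-generative task: here, a two-class classifier,
# e.g. flagging text descriptions of transactions as fraudulent or not.
# The fraud framing is illustrative only.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)

# From here, you would continue training ("fine-tune") on labeled examples
# of the new task, e.g. with transformers.Trainer, so the model's general
# language knowledge gets specialized to classification rather than generation.
```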
Even within the more specific category of Large Language Models, there’s wide variation—so wide, in fact, that generalizing about them is often misleading. Here are just a few of the dimensions on which they differ:
LLMs now come in a huge variety of sizes. Some are small enough to fit on a laptop, and some need millions of dollars’ worth of servers to run; the back-of-the-envelope calculation below gives a sense of the range. Different sizes have different energy demands, security considerations, capabilities, etc.
Some LLMs are completely open-source, meaning all of the details of the models are publicly available, and some are based on carefully kept corporate secrets. Still others are “open weights” but not “open source,” meaning you can download and run the models for free on your own isolated system, but you don’t have enough information to reproduce the models from scratch.
Different companies use different information scraped from the web and other sources to train their models. Some companies are more careful than others about using proprietary information in their training data.
There is huge variation in privacy concerns when using LLMs. DeepSeek shares its data with the CCP. Facebook’s LLM interactions are completely public. Apple’s AI queries are fully private when simple enough to run on-device, but more complex requests are routed through OpenAI’s servers. Most companies offer an option to keep users’ LLM interactions out of their training data, but in some that’s the default, while in others you have to enable it proactively.
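Returning to the question of size, here is the promised back-of-the-envelope calculation of the memory needed just to store a model’s weights. The parameter counts are rough, representative assumptions rather than figures for any specific product, and real deployments need additional memory beyond the weights themselves.

```python
# Back-of-the-envelope memory needed just to hold a model's weights,
# assuming 16-bit precision (2 bytes per parameter). Real deployments also
# need memory for activations, caches, and serving overhead, and
# quantization can shrink these numbers considerably.
BYTES_PER_PARAM = 2  # 16-bit floats

def weight_memory_gb(num_params: float) -> float:
    return num_params * BYTES_PER_PARAM / 1e9

for name, params in [
    ("small on-device model", 3e9),    # ~3 billion parameters (assumed)
    ("mid-size open model", 70e9),     # ~70 billion parameters (assumed)
    ("frontier-scale model", 1e12),    # ~1 trillion parameters (rough order)
]:
    print(f"{name}: ~{weight_memory_gb(params):,.0f} GB of weights")
```

Run as written, this prints roughly 6 GB, 140 GB, and 2,000 GB respectively, which is why some models run comfortably on a laptop while others require racks of specialized servers.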
The next time you hear an advertisement touting a new “AI-powered” product, stop and ask yourself, “What exactly do they mean by AI?” When someone advocates a ban on AI, push them to specify which AI and why. And when proponents extol AI’s virtues, challenge them to clarify precisely what systems they’re referring to. Without this deeper reflection, we risk letting “AI” devolve into a meaningless buzzword—obscuring genuine progress, hiding legitimate risks, and making informed, nuanced discussion nearly impossible.
Useful read! On the study cited to support the claim that we have current AIs that can "pass the Turing Test": I've always thought that the 5-minute limit on the Turing Test suggested by Turing is way too short. When I've run Turing test-like demos in class, I find it takes students multiple trials and many minutes to figure out how to approach the test.
Also, 73% is pretty impressive (and hints at "more human than human!"), but I wish the paper explained more clearly what the prompt was for the humans in the test, as that presumably has an impact on how the humans engage with the task. Finally, I suspect that the UCSD students did better than the Prolific group because they were more likely to know what a "Turing test" is and optimize their strategy.