Entropy Bonus is also available in audio format. That’s just me reading each entry, for those of you who would like to listen instead of read. It’s available through Substack or Spotify.
One of the most common concerns I hear is how AI will affect different professions. It’s a theme I’ll return to often. In this first post on the topic, I’m starting with the job I know best — professor. However, before diving into how I believe my role as an educator will change, I’m going to talk about how my role as a student has changed.
I’m old enough that my childhood happened before laptops and cell phones. When I had to do a project for school, I’d look things up in my family’s World Book Encyclopedia. For those unfamiliar, these were popular tomes of knowledge designed to fill a shelf and serve as a household reference on virtually any topic. The encyclopedia was great. As a kid I’d have dreams of becoming the smartest person in the world by memorizing the entire encyclopedia. So I’d randomly pick up a volume and start reading (well … mostly just looking at the pictures). I’d invariably get about two pages in, get bored, and walk away. School still had a lot to offer.
The internet killed the encyclopedia. When the need for some unknown fact arises in conversation around my household, someone will say “Let’s consult the silver box,” referring to one of our MacBooks.
Now when I want to learn a new subject, I don’t turn to an encyclopedia or the internet; I turn to AI. For example, I recently passed a cemetery for soldiers who died in the Spanish-American War, and I realized I remembered nothing about that subject. So I asked an AI chatbot, “Teach me about the Spanish-American War.” I could have just as easily looked up facts about the war on Wikipedia, but by consulting a chatbot I could ask follow-up questions. On technical subjects I’ll ask the chatbot to use simpler language when I don’t understand something, or to provide analogies. When there are details that I think are important, or where I feel like my understanding is shaky, I’ll drill down on specific points. I’ll even periodically ask a chatbot to quiz me to check my understanding. And some AIs now provide helpful encouragement when I’m “getting it,” and gentle prodding when I’m not. In short, with AI assistance I can actively engage with information in ways that were much more difficult before. As a result, I have found that AI has multiplied my ability to acquire knowledge many times over. No school required.
But I am not a college student. When I was a college student I didn’t have the same intellectual autonomy that I do now. In some sense, that’s exactly what college taught me.
Students are often forced to take classes they’re not excited about to satisfy graduation requirements. Some students aren’t excited about any classes, and are just there to get a degree. In those cases they usually try to get through the material as quickly and with as little engagement as possible. AI interaction isn’t going to be nearly as effective with that mindset.
Humans are also social creatures. I recall learning as much from my peers in study groups as I did from my professors. Many times I would think I understood some topic until I tried to explain it to someone else. That doesn’t happen as much when interacting with an AI. I also recall specific professors who I found inspiring, who I developed personal relationships with, and who I viewed as role models. AI could never supplant that.
Knowing a lot of facts also isn’t the same as being educated. AI can be very helpful in acquiring and understanding information, but perhaps a more important role of schooling is to help us acquire skills. That is more difficult for AI to assist with, and AI may even be detrimental to it. A few specific skills highlight this:
- Artistic skills. It’s very hard to imagine how AI can be helpful in a painting or ceramics class, for example.
- Writing skills. Among academics I’ve talked to, this is by far the most controversial. AI can help develop writing skills if used correctly. Unfortunately, students are increasingly using it as a way to avoid writing, and thus are not developing as writers themselves. I’ll discuss this in a dedicated post at some point.
- Critical-thinking and problem-solving skills. AI systems may be harmful to learning these skills, since they are increasingly able to do much of that work for students. Their ability to write proofs assigned in a math class, for example, has improved dramatically.
For all these reasons I believe the academy isn’t going away any time soon. Colleges will still play a valuable role in the educational journey of many students. But colleges are certainly going to change. The onus of simply conveying information is going to shift from professor to AI, as it should: in my experience, AI (when used properly) is a more effective means of conveying information than a stereotypical college lecture, and recent research supports this. For this reason, I have started encouraging students to consult an AI when they’re confused about something we’ve discussed in class. In class I also spend time talking about effective strategies for using AI as a teacher, hoping students will acquire the most important lifelong skill I can pass on: the ability to self-educate.
There’s no one-size-fits-all model for incorporating AI as a learning aid in college classrooms. This is where the disciplinary expertise of faculty is going to play a key role. In some of my own classes the use of AI assistance is strictly forbidden, because those classes are mostly about skill mastery. In others, which focus more on understanding and digesting complex information, it is strongly encouraged.
Before I close, I want to address one of the most common criticisms I’ve heard of AI-as-learning-assistant: the possibility of “hallucination,” where an AI confidently states falsehoods as facts. Many people tried an AI chatbot when chatbots were still relatively new and hallucinations were common. As a result, there is a perception among some that AIs are untrustworthy. That’s true to some extent: AIs will never be reliable in the same way that a calculator is (more on why soon!). However, new models are being released at a rapid pace, and each one generally brings lower hallucination rates. We are now at a point where hallucinations are uncommon enough that they’re far less of a concern than they once were. Even human teachers get things wrong occasionally, so the bar should not be perfection before advocating widespread AI use as a learning aid. But students must be aware that some amount of hallucination is going to happen, so old-school research skills, like double-checking sources, are still important.
To anyone still skeptical about the value of AI in the educational process (or who would like to know more about its limitations), I would strongly urge you to just play with one of the latest models. First, choose a topic that you know a lot about and ask a few questions that you think someone less knowledgeable would ask. That’ll give you a sense of how trustworthy its answers are. Next, pick a topic you know very little about, and ask it to teach you. Engage with it. Have a dialogue. You may be surprised by what you come away with.
AI in education is a complicated topic, and here I’ve only scratched the surface. In future posts I’ll discuss AI in K-12 schools, AI in specific subjects (e.g., math, science, languages), what educators can do about cheating with AI, and a myriad of related issues.
Just listened to this podcast episode, which might be of interest: https://teachinginhighered.com/podcast/myths-and-metaphors-in-the-age-of-generative-ai/
The guest spoke about how pervasive he anticipates these new Meta glasses will be in the future. I wonder what thoughts or ideas you might have about this as it relates to teaching.
One issue you don't get into is the signaling / credentialing value of education. I find the argument in (e.g.) Bryan Caplan's *The Case Against Education*, that signaling is the main value of education, to be pretty persuasive. I haven't seen a lot of discussion about how AI will impact the signal of a college degree, or how professors should incorporate AI into their classrooms to influence this signal. Your discussion, for example, focuses only on how AI impacts student learning and ignores issues like how employers will interpret degrees and grades.
My sense is that the main reason most faculty are "scared" of AI is really about how they perceive it to affect the prestige of college. This can manifest in issues like grade inflation or increased cheating, which reduce the value of the signal of a college degree, rather than in any actual impact on student learning. (Although I'm definitely putting words into other professors' mouths here, and the people I'm thinking of certainly wouldn't agree with that framing.)