What should university AI education look like?
The answer is surprisingly clear.
My colleagues are having a lot of discussions about how AI is going to affect college education. Their primary concern is students using AI tools to cheat, which at best invalidates long-standing methods of assessment and at worst lets students circumvent the learning process entirely. A few of my more forward-thinking colleagues are thinking about how to leverage AI tools to create new educational opportunities, and there have been numerous discussions about acceptable uses of AI by faculty for grading, peer review, committee work, and so on. Yet, ironically for institutions whose primary mission is education, what’s missing from all of these discussions is any mention of what students, who are about to enter an AI-powered world, actually need to know about AI, undeniably one of the most important issues facing society. What will surprise many is that the answers are actually quite simple. What’s hard is creating the institutional willpower to implement them.
I shouldn’t harp on my particular institution too much. I’ve talked about this with many colleagues at other schools, and I haven’t seen anyone handling AI education properly. The problem is obvious: AI is moving very fast, and universities change very slowly. With that said, there is a lot that can be done by any school that realizes it has to take this seriously. What students really need to know breaks down into three main pillars:
1) Understanding the broader impacts of AI. This is the main topic of my substack, but weekly posts that take 5-10 minutes to read can only scratch the surface. My regular readers will know that there are serious concerns around AI and energy use, intellectual property, military development, the future of the job market, international relations (especially with China), and much more. Every university needs to develop courses that help students unpack these issues, because they affect us all in one way or another. Such courses should be as free from bias as possible, since all of these issues are complex and have multiple sides. University courses that try to convince students that AI is evil won’t serve them well as they enter a world where AI use is unavoidable; at the same time, all students should learn to be appropriately skeptical of the promises of big tech.
2) Understanding how to use AI. This is crucial for preparing students to function in this new world, but doing it properly is quite difficult. At the very least, students should understand the strengths and weaknesses of the chat interface, how to choose models, how to write prompts, and so on. But surface-level AI use, such as through LLM chatting, isn’t going to be enough. Ideally, students should also know about more advanced topics such as API access, RAG, agents, and tool-calling (see the first sketch after this list for a taste of what that looks like). While this may seem technical enough to leave to the software engineers, the fact is that AI usage skills are going to be as critical as knowing how to send email, and employers will expect them. We do our students a disservice if we don’t teach them these skills.
3) Understanding how AI works. So much misinformation about AI stems from people who don’t understand it. I’ve developed several classes at my institution to help students understand how this new technology works, but they’re just not enough. For example, in my Machine Learning class I can only spend a week at the end of the term on how modern LLMs like ChatGPT work (the second sketch below shows the kind of building block such a week covers); that really should be a whole class by itself. Furthermore, my classes require a fair amount of coding experience and mathematics, but one could certainly develop classes along these lines for a much more general student audience.
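To make pillar 2 concrete, here is a minimal sketch of what “API access” and “tool-calling” mean in practice. It assumes the OpenAI Python SDK; the model name and the get_course_info function are illustrative placeholders, not a recommendation of any particular provider, and any LLM API with tool-calling support works along the same lines.

```python
# A minimal sketch of programmatic LLM access with a single tool call,
# using the OpenAI Python SDK (pip install openai). The model name and
# the get_course_info function are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a function the model is allowed to "call" by emitting
# structured arguments; our own code would then execute it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_course_info",  # hypothetical function for this example
        "description": "Look up a course in the university catalog.",
        "parameters": {
            "type": "object",
            "properties": {"course_id": {"type": "string"}},
            "required": ["course_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    messages=[{"role": "user", "content": "What is CS 101 about?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured arguments arrive
# in the response instead of ordinary text.
calls = response.choices[0].message.tool_calls
if calls:
    print(calls[0].function.name, json.loads(calls[0].function.arguments))
else:
    print(response.choices[0].message.content)
```

The point for students isn’t this particular SDK, but the pattern: the model is a component you wire into software, not just a chat window.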
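And to give a flavor of pillar 3, here is the kind of toy example a week on “how LLMs work” might build up to: scaled dot-product attention, the core operation inside transformer-based models. The shapes and values below are made up for the classroom; real models stack many such layers with learned weights.

```python
# A toy, classroom-scale illustration of the attention mechanism at the
# heart of modern LLMs: each position mixes information from the others,
# weighted by query-key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, d) arrays of queries, keys, values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted average of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 "tokens", 8-dim embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)                             # (4, 8)
```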
One way to partially address these pillars is to integrate AI topics into existing classes. For example, one can imagine sociology, philosophy, economics, and international relations classes that take up the broader impacts of AI listed above. Effective AI use could likewise be taught within the disciplinary classes that already leverage it: many science classes are starting to incorporate machine learning tools, and one can imagine history classes where students learn to use LLMs for text analysis, or art and design classes that use the AI tools in Photoshop and Canva. With all that said, integrating AI into existing college classes requires faculty who are open to it, and many faculty are somewhere between resistant and outright hostile. Even those who are more accepting may not be willing to put in the time to learn these new tools themselves. That will change over time as younger faculty who were educated in this new AI world come on board, but universities can’t wait that long.
Another option for universities is to develop a slate of dedicated AI classes. That requires faculty with sufficient knowledge who are willing to take it on. Furthermore, at many colleges class offerings are a zero-sum game: offering AI-specific classes means not offering something else, and that can be met with resistance. Circumventing this requires sufficient fundraising to add new faculty and classes in addition to, rather than instead of, existing ones.
Realistically, any university that takes AI seriously is going to have to do both: weave AI-related content into existing classes, and develop some number of AI-specific classes.
I’ve been thinking about syllabi for AI-specific classes for a long time. For this substack I’ve amassed a wealth of resources about the broader impacts of AI. I have lots of ideas for courses that focus on effective AI use, and even more for courses, taught at various levels, on the inner workings of AI. If you work at a university and would like to discuss this further, feel free to reach out to me!


