5 Comments
Melissa Chadburn:

I recently listened to this episode of Teaching in Higher Ed on The AI Con, the book by Emily Bender and Alex Hanna: https://teachinginhighered.com/podcast/the-ai-con/. I really appreciated the unpacking of the concept of stochastic parroting. I participated in the norming process for FYS writing assignments last semester. One of our colleagues uploaded the rubric parameters into an AI and then uploaded one of the essays they thought was exceptional. The essay performed poorly according to the AI; then they uploaded one of the essays that was less innovative, and it performed really well. The concept of stochastic parroting explains why. The most elusive aspect of the rubric was reading for intellectual inquiry. I guess what I'm saying is that this is helpful to me in explaining to students why generative AI and LLMs are not good at creative writing.

David Bachman:

Hi Melissa! Thanks for sharing your thoughts. Yes, LLMs struggle with creative writing (although some are better than others... always worth experimenting). I've heard many experts say they expect Nobel Prize-level work from an AI in science long before literature.

The "stochastic parrot" interpretation of LLMs, though, isn't a great mental model anymore, especially now that RL-powered "reasoning" models have become available. I'll talk a lot more about that in my post two weeks from today!

Margaret Wertheim:

When I was a CS student, they gave us the Towers of Hanoi to solve as an introduction to recursion. After a week of cogitating, the algorithm came to me in a dream - literally. It was one of the great learning experiences of my life. I liked your explanation of what the Apple paper actually said about the AI's ability to solve it. Clearest explanation I've read yet. I do wonder: could an AI have *my* experience - of coming to a solution as a kind of revelation?
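(For readers who never met the exercise: the recursive insight Margaret describes can be sketched in a few lines. This is just a minimal illustrative version; the function and peg names are made up for the example.)

```python
def hanoi(n, source, target, spare, moves=None):
    """Move n disks from source to target, returning the list of moves."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))  # base case: one disk moves directly
        return moves
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top
    return moves

# Solving n disks takes 2**n - 1 moves: 3 disks -> 7 moves.
print(len(hanoi(3, "A", "C", "B")))  # → 7
```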

David Bachman:

What an interesting question! When the first reasoning models came online, several people observed that the word "Aha!" actually appeared in the LLM's chain-of-thought when it had a key insight during problem solving. So maybe this kind of thing is already happening?

Margaret Wertheim:

Maybe androids do dream of electric sheep...
