When I created Foundations of Generative AI last summer, I committed a cardinal sin: I quoted a book I hadn’t fully read. I’d seen the quote from the authors on LinkedIn and read the first chapter, but I hadn’t read the entire book.

Fortunately, the book is fantastic. AI Snake Oil, by Arvind Narayanan and Sayash Kapoor, is an insightful exploration of some of the undeserved hype, overblown risks, and understated downsides of artificial intelligence.
But there was something else interesting: the audiobook includes a bonus chapter, a sort of podcast covering what’s new since the original publication date. One of the ideas this bonus chapter surfaced was using AI for literature reviews, and its tendency to reinforce a “rich get richer” dynamic in citations.
That concept was the core conceit of what I wrote a few months later in From model collapse to citation collapse: risks of over-reliance on AI in the academy.
I’ve tried to find anywhere else that idea might be unpacked more completely. The authors wrote an excellent article on their blog titled “Could AI slow science?” in July 2025 that alluded to the idea, but the audiobook unpacked it in more detail. There have been other articles about this, like this one, and this one, and this one, but none of them directly tie into the bonus chapter’s notes.
So, I’m sharing this for three reasons:
- AI Snake Oil is a great book.
- Citation collapse was already being discussed before my article. I wish I’d listened to the book first so I could have tied into their commentary on the topic.
- If anyone knows more specifically what they’re talking about in the bonus chapter, let me know!