
A UCLA professor’s AI-generated textbook for a comparative literature course on medieval and Renaissance-era writing has sparked significant controversy in academic circles, yet its creator maintains it was a successful educational experiment.
The Controversial AI Textbook
The digital textbook, announced by UCLA in late 2024, drew immediate mockery from educators. Its error-filled AI-generated cover featured nonsensical text, such as “Of Nerniacular Latin To An Evoolitun On Nance Langusages,” alongside generic visuals unrelated to the historical period the book covered.
Elizabeth Landers, a graduate student who helped create the volume, defended these errors as “an intentional artistic choice” meant to challenge students’ assumptions about language and historical truth, rather than as failures of the AI.
Professor’s Defense
Professor Zrinka Stahuljak, in an interview with Inside Higher Ed, called her decision to use an “AI-assisted” textbook a “no-brainer” due to the time it saved her. She expressed shock at her colleagues’ skepticism, insisting that the textbook was “carefully edited” and her own creation.
Stahuljak argued that her $25 AI-facilitated custom textbook was superior to traditional $250 textbooks that quickly become outdated. The textbook was created using Kudu, a digital textbook platform developed by another UCLA professor, using Stahuljak’s own notes rather than external sources.
Claimed Benefits
According to Stahuljak, the AI features made the material more accessible, with some students reporting they listened to it while exercising. She also claimed that student engagement increased compared to classes without the AI textbook.
The professor positioned her approach as preferable to students independently using ChatGPT, suggesting her controlled AI environment was better than “commercial generative AI-powered tools” that pull information indiscriminately from the internet.
Unaddressed Concerns
Critics point out that Stahuljak has not addressed several significant concerns about AI in education:
- AI chatbots’ tendency to generate factual inaccuracies regardless of their data source
- Growing evidence that AI tools may diminish critical thinking skills and attention spans
- Broader concerns about tech companies using educational institutions to promote their products
The academic response has been sharp: one English professor suggested such practices threaten teaching and learning, while another argued that using AI this way represents an “abandonment of professional responsibility.”
Implications
The controversy highlights the ongoing tension between technological innovation in education and the preservation of academic integrity and critical-thinking skills, particularly as younger students arrive at college with potentially diminished reading abilities.

