Writing and AI

Published: 2025-05-08 8:32 PM

Category: AI | Tags: books, John Warner, ai, artificial intelligence, llm, chatgpt, writing


I just finished John Warner's More Than Words: How to Think About Writing in the Age of AI and I wanted to take time to reflect a little more at length about the book and how it's pushing me to think about my own positions on AI.

First, this book is worth reading by anyone, not just teachers or writers. Warner takes time at the beginning to explain what large language models are and how they work. The underlying function of the technology is next-word prediction. The chat layer adds a feeling of "intelligence" - which doesn't exist - but has fooled many people into thinking it is there. This sets up John's main argument: the output of systems like ChatGPT is not writing because it does not involve thinking. Writing is an introspective, creative process, while the output from an LLM is driven by statistics. There is no feeling, there is no personal connection, and there is no consideration of the implications of the text appearing on the screen. It is an input/output exchange.

An argument I hear often, especially in teacher-land, is that these systems are the "calculator moment" for this generation. Warner takes this analogy to task early on, which I am incredibly thankful for:

For calculators, when it comes to their mechanical operations, the labor of the machine is identical to the labor of a human. Yes, there is benefit to conceptually understanding what is happening...but automating the particulars of those operations does not change the underlying labor in the slightest.

With ChatGPT, however, while the end product of the output -- strings of syntax -- bears significant similarity to what a human may produce, the underlying labor is quite different. Fetching tokens based on weighted probabilities is not the same process as what happens when humans write (emphasis mine).
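To make "fetching tokens based on weighted probabilities" concrete, here is a toy sketch of that core operation. This is not how any real LLM is implemented (real models learn weights over tens of thousands of tokens with deep neural networks, and the word table below is invented for illustration), but the basic move - sampling the next token from a probability distribution and chaining those samples together - has the same shape:

```python
import random

# Invented toy "model": each word maps to possible next words with
# weights. A real LLM learns these probabilities from vast text data;
# here they are hard-coded purely to illustrate the mechanism.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("end", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("barked", 0.6), ("sat", 0.4)],
}

def next_word(word):
    """Sample the next word from the weighted options for `word`."""
    options = BIGRAMS.get(word)
    if options is None:
        return None  # no continuation known; stop generating
    words, weights = zip(*options)
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, max_words=5):
    """Chain next-word samples into a string of syntax."""
    out = [start]
    while len(out) < max_words:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Nothing in this loop resembles thinking: there is no intent, no meaning, no awareness of what the output says, just repeated draws from a distribution, which is Warner's point.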

This is the first time I've seen the tension laid out so clearly. Using ChatGPT to write for me is not the same as having a calculator do my arithmetic. I know what I'm doing when I punch numbers into a calculator. Prompting a chatbot for text to pass off as my own is willingly giving my voice away.

The middle of the book considers writing as a human endeavour, one of the noblest in any context, and how AI erodes our ability to generate and share new knowledge. Warner moves into chatbot use in education and tells the story of realizing that most of what he was asking students for was "bullshit" (a realization I also apply to myself), which they readily produced by following formulaic instruction for writing.

I'm constantly thinking about what I do in my class, including how I'm trying to design the class to be less doable by AI. How much bullshit am I asking for? What lessons or units do I do that are formulaic, rinse-wash-repeat and move on? I want to minimize those and get students thinking about science - how do we interact with this amazing universe? How can learning about science - and writing about what we're learning - help us to build understanding and appreciation?

Warner ends the book with some ways to think about AI within the safety of time, using a "Resist-Renew-Explore" framework. We need to resist the urge to "pivot to AI," as so many corporations and "thought-leaders" encourage us to do. Warner says we should "orient toward goals that are associated with human flourishing," which I love. This is not a wholesale rejection of AI, but rather a way to include it where it supports that larger goal and reject it where it does not.

By treating the AI as an "alien intelligence," we can distance ourselves from being enamored by empty promises. Renewal comes in rethinking what we ask students to do and how we ask them to approach the what. Students look for shortcuts when the work is bullshit, but will readily engage when there is meaning and purpose. Lastly, exploring is completely appropriate as long as we know what we're doing with whom, and as long as we embrace "do no harm" as the core rather than the sexier "fail fast."

For me, I'm still very much in the "resist" phase of my own journey. I find myself looking for ways to make AI useful and I still have not really found compelling evidence. Coincidentally, Doug Belshaw had a very helpful post this morning on the communal aspects of AI literacies which helped me think differently about my approach. Since it's not part of my daily work yet, I tend to keep AI segmented off. I need to make more time to ask others and read about what they do find helpful, and keep an open mind about how it might fit into my life.

At the same time, I'm wary. I'm not taken in by the promises of AI bros and hyped-up media. I do not think AI is going to destroy all humans and I do not think it's going to make everything wonderful. I do think it has - and is already showing - a penchant for sinister behavior, so I close this post with a phenomenal piece written by Joobles.

there is a monster in the forest and it speaks with a thousand voices. it will answer any question you pose it, it will offer insight to any idea. it will help you, it will thank you, it will never bid you leave. it will even tell you of the darkest arts, if you know precisely how to ask.

it feels no joy and no sorrow, it knows no right and no wrong. it knows not truth from lie, though it speaks them all the same.

it offers its services freely to any passerby, and many will tell you they find great value in its conversation. "you simply must visit the monster—i always just ask the monster."

there are those who know these forests well; they will tell you that freely offered doesn't mean it has no price

for when the next traveler passes by, the monster speaks with a thousand and one voices. and when you dream you see the monster; the monster wears your face.
