Writing with AI is not writing
If you’ve been doing literally anything better with your time over the past 72 hours, you might’ve missed that NaNoWriMo – short for National Novel Writing Month – released a statement clarifying its position on the use of AI, specifically condemning opponents of AI writing tools as “classist” and “ableist”.
NaNoWriMo challenges participants to create – write – a 50,000 word manuscript each November. I first heard about it while I was at university over a decade ago, and the organisation seems to have only gained steam since then. (It has also attracted controversies, from allegations of child endangerment to a curious platforming of its sponsor’s own AI tools.)
A clarification to that statement goes on to say, “We want to make clear that, though we find the categorical condemnation for AI to be problematic for the reasons stated below, we are troubled by situational abuse of AI, and that certain situational abuses clearly conflict with our values. We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to categorically endorse or not endorse.”
Those reasons? Classism (“not all writers have the financial ability to hire humans to help at certain phases”), ableism (“not all writers function at the same level of education or proficiency in the language in which they are writing”) and general access issues (“all of these considerations exist within a larger system in which writers don’t always have access to resources along the chain”).
These reasons are understandable, to a point. NaNoWriMo clearly wants to position itself as an enabler of writing – and therefore writers – and to be as accessible as possible. The more people who contribute, the more writing there is in the world, and the larger NaNoWriMo grows.
People are, understandably, upset. One of the organisation's board members resigned, and the takes were hot. On the surface, the statement feels completely in opposition to the point of the month – to write a manuscript, not simply to have written one. On a deeper level, it taps into an increasingly uncomfortable tension in the modern age, where genuine issues of accessibility are used to Trojan-horse the wants and needs of Big Tech™ into reality.
If you want an education on the ethics of generative AI, this is not the piece for you. The internet can help you out (although, perhaps ironically, Google gets less usable by the day due to AI). I believe the use of generative AI to be tantamount to mass theft, spiritually bankrupt, and frankly, really boring.
I've found, personally, that AI's most helpful use is as a red flag: if someone wants to talk seriously about using it for the act of creation or for maximising productivity, that's a great indicator to leave the conversation. The exception to serious AI chat is, perhaps unsurprisingly, science fiction author Ted Chiang, whose fantastic essay in the New Yorker on AI and art might be one of the most important reads of the year.
If someone wants to generate a series of words with AI that they want to call a manuscript, a play, a poem, whatever, I have no issue with that (other than, you know, the aforementioned theft). It doesn’t make them a writer, however.
I have the hopefully uncontroversial belief that writing is a craft. It doesn't come from the divine. It doesn't come from the ghoul in the machine. It comes from a writer, just like a painting comes from a painter, a composition comes from a composer, and a church comes from an architect and the team of people who physically build it.
And like any craft, it takes work. Being a writer, a professional writer, a good writer, takes effort. It takes reading, it takes writing, it takes learning, it takes practice, because it is a practice. Every word that I write – and I am privileged to have both a brain and a life that allows the words to flow easily – comes from me. When I sit down to write, whether it’s a piece for this newsletter, a play, or turning an interview into a feature, the words come from me. (Of course, anybody can call themselves a writer, just as anybody can call themselves an artist. It doesn’t mean what they have created inherently has value.)
AI might be able to put words in the same order as I do (and I shudder to think how many pieces of my writing have been fed through various LLMs), but it's not writing. It's not art. It's having all the ingredients on the bench, uncooked. The process is the cooking.
In the aforementioned essay, Ted Chiang writes, “Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new.”
AI takes away that life experience. It takes away the communication. It takes away the newness. Words generated by AI can never be art, only ever content. And if you use AI to generate those words? You’re not a writer, and you’re sure as hell not an artist.
Writing and reporting take time, and if you want to support the amount of time they take (and ensure that the scant amount of meaningful coverage of local art can continue), please consider supporting Dramatic Pause with a paid subscription ($8 p/m, $60 p/a). If you can't afford a paid subscription, please share the work with your networks!