
Can AI Write “Well”?

And what does that mean for how we teach writing?


BY STEVE NUZUM


I recently had an interesting discussion with my brother, one of the more tech-savvy people I know, about whether AI will ever be able to write “well.” He argued that eventually, given enough examples of writing, enough feedback about whether its own “writing” was good, and enough time, AI would be able to write “better” than human beings.


I was an English teacher for sixteen years, so I admit I can’t be completely objective about this. My understanding of AI is also limited to what I have read in research and coverage intended for non-experts in the technology. Still, as a writer and writing teacher, I don’t think AI poses much risk of replacing “good” writing (depending, of course, on what we mean by “good” writing) in the near future.


For that matter, I think we have to question whether AI chatbots-- like ChatGPT-- are actually creating “writing” at all.


For the sake of fairness, let’s set aside an obvious characteristic of human writing that AI, by its very nature, cannot replicate: AI can’t authentically reflect human experiences because it does not have human experiences. If readers seek out creative or “literary” writing because they want to experience a kind of communion with another human mind, the bare fact that AI is not a human mind makes that impossible.


But leaving that aside, I don’t think AI has much of a chance of doing what my brother was describing, either: replicating the features of “good” or “effective” writing so well that its work becomes either indistinguishable from or superior to human writing.

An AI image created by Canva’s free Generative AI tool, “inspired” by the prompt, “A visual representation of an AI chatbot, given corporeal form.”


To be sure, AI chatbots like ChatGPT are remarkable at “creating” text at extreme speeds that meets the specifications of human-generated prompts. (The fact that chatbots frequently do so by plagiarizing human writing is a problem that has not been satisfactorily addressed. For example, my coworker and I recently asked ChatGPT to “write a haiku,” and it quickly spit out a three-line piece of writing that adhered to the most surface-level features of haiku-- specifically, the 5-7-5 syllabic pattern that many haikus follow-- but which also lifted a line word-for-word from another writer’s poem. When we pointed out to the chatbot that it had committed plagiarism, it responded that “as an AI chatbot, I am unable to plagiarize.”)


Alternatively, when I gave ChatGPT the prompt, “Write a three-sentence paragraph concisely weighing the pros and cons of using AI to complete writing tasks traditionally completed by professional writers,” the tool spit out this paragraph in less than a second:

Using AI for writing tasks offers unparalleled efficiency and scalability, enabling rapid generation of content across various topics and styles. It can also reduce costs associated with hiring professional writers and provide consistent quality output. However, AI-generated content may lack the human touch, creativity, and nuanced understanding that professional writers bring, potentially leading to generic or impersonal content that fails to resonate with audiences on a deeper level.


To me, that is an example of serviceable prose. It sounds a bit like advertising copy that has been through several focus groups to have the edges sanded off, and it reflects basically what I wrote above-- albeit with a fairly flat, boring style. 


If I gave ChatGPT further prompts and feedback about what it wrote, I might eventually get something with a little more flair and specificity. But to me, this also demonstrates something many of my students struggled with: chatbots probably produce better “writing” for people who are already skilled writers, and who can therefore judge effective writing and adjust it to their purposes.


And the pace of improvement in AI chatbot technology is obviously incredible.


This time last year, my students were just beginning to explore the potential of bots to replace their own work and help them perform what is traditionally known as cheating on writing assignments.


One of them was quite successful, in the sense that he used AI to write a fairly mediocre, low-scoring research paper for a final project in an AP class-- but he did it well enough that the College Board, which awards credit for AP courses, declined to invalidate his score for violating its plagiarism policy, which he definitely had done. (Interestingly, College Board has significantly updated its plagiarism policy since last year, allowing students to use “Generative AI” as long as they are still doing the reading, research, and original thinking. To me, this is a very fuzzy line, and it speaks to how difficult it is, in reality, to catch students using AI to plagiarize on a standardized assessment without knowing the students personally.)


So was the chatbot effective at helping him complete his task, at least in a superficial way? Yes. It satisfied the rubric well enough that he was able to technically meet the criteria for “completing” the assignment.


Again, if the central question is, Can you make AI seem like it could be a real person, or at least create enough plausible deniability that you can get away with using it to cheat?, then the answer is… Probably! But this student’s paper would not be, for almost anyone, an exemplar of “good” or “effective” writing beyond that goal. Its language was stilted and weird, and it didn’t reflect the kind of deep thinking the assignment required: bringing in multiple human perspectives through a process of scholarly research.


I imagine that if the student had used the same software to cheat this year, it would have done a much better job-- but it still would not have been doing the work-- that is, the thinking-- for the student. AI chatbots do not think; they essentially perform a very complex kind of aggregation: they look at a tremendous number of examples of a product-- in this case, human writing-- and then generalize the rules and conventions that make that writing work.
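(For readers who want a concrete picture of what that kind of aggregation might look like, here is a deliberately tiny sketch in Python. It is my own toy illustration, not how ChatGPT actually works-- real systems use neural networks trained on vast datasets-- but it shows the basic move of tallying patterns in example text and then generating new text from those tallies. The example sentences are invented for the demonstration.)

```python
# A toy "writer": learn which word tends to follow which in some
# example text, then generate new text from those tallies.
import random
from collections import defaultdict

examples = [
    "the old pond a frog jumps in",
    "the old road a crow calls out",
    "the quiet pond a frog sits still",
]

# "Training": tally every word that follows each word in the examples.
follows = defaultdict(list)
for line in examples:
    words = line.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

# "Writing": start from a word and repeatedly pick a word that
# followed it somewhere in the examples.
word = "the"
output = [word]
for _ in range(6):
    if word not in follows:  # no known continuation; stop
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the quiet pond a frog jumps in"
```

Everything the toy model “writes” is recombined from its examples; nothing in it resembles having an idea. Scaling that pattern-matching up enormously makes the output far more fluent, but it is still generalization from examples, not thought.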


Of course, if our main goal is for students to internalize conventions-- such as “Standard English” grammar rules-- AI can certainly do that in a fairly reliable way. And in obsessing for years over rubrics and over standardized writing assignments that are easy to score, we have certainly sent a strong message that we do, at least in part, value this kind of technical completion, sometimes at the expense of actual writing.


And there is at least the perception that AI software has gotten much better at creative writing tasks. After all, the Hollywood writers’ strike last year was heavily focused on the ways film and television studios are already using AI tools to replace writers.


The problem is that the purpose of writing isn’t-- or at least shouldn’t be-- to create a simulacrum of human thoughts, or to demonstrate to a judge or a group of judges (or a computer program grading a test) that the writer has memorized conventions of grammar.  


It should be an expression of human thoughts. 


Perhaps AI could function like a calculator in a math class, if used with the right constraints: it could help writers who already have well-developed ideas try out different ways to express those ideas on the word or sentence level.  Maybe it could perform a kind of advanced spelling and grammar check function.  It might be able to fill in the gap between the person and the thesaurus.  


But as writer and writing teacher John Warner has frequently pointed out, even that hypothetical use only works if we are assigning work in school that only humans can meaningfully do. It seems clear that, for better (and plausibly for worse), AI technology is here to stay. No matter how many rocks Google’s AI tells us to eat, the pace of AI expansion is currently being determined not by ethicists (or even by its creators, many of whom continue to warn about the potential dangers of its unchecked growth) but by corporate boards. This raises all kinds of scary implications, but those will mostly have to be addressed-- if they are addressed at all-- by policymakers and other people with the power to make such decisions.


In the classroom, we can try harder to assign the kinds of writing that help students express human thoughts because the tasks are centered on human thoughts. One of the best student presentations I saw during my last year as a teacher was about a wacky CW show that the kids were passionately arguing was important and necessary. I seriously doubt they were ever tempted to use ChatGPT to generate an argument for them, because they were writing and speaking about something that mattered to them, using academic language because they intuitively knew it would add credibility to a debatable topic-- because they, as human beings, cared about how their argument would be perceived.
