This semester I have been receiving a lot of bizarrely polished essays from my students. They aren’t plagiarized, or even straight-up ChatGPT output. I haven’t seen one grammatical mistake, not one spelling mistake, but these essays don’t read like they’ve merely been through spell-check and grammar-check. The word choices are oddly elaborate, yet somehow simultaneously formulaic. For example, phrases like these:

  • “The diverse tapestry of linguistic diversity”
  • “profound implications”
  • “a catalyst for positive change”
  • “In essence…”

Or entire sentences like these:

  • “Let us be mindful of the role we play in shaping a more inclusive and equitable world.”
  • “While this ideology may masquerade as a beacon of clarity, it often acts as a restrictive force.”
  • “We not only advocate for diverse linguistic forms but also honor the deep cultural narratives they embody.”

What is going on here? 

Taking a citizen sociolinguistic approach, I talked to several “authors” of such phrases and essays to try to answer that question. I tried to remain curious, not judgmental: What is their process? What tools are they using to help with their writing? And why?

Several different strategies emerged in our conversations. Examples, from most to least complicated, include:

  • Writing an essay in Chinese, then getting three translation options from Google Translate, choosing the most “academic”-seeming version, then running that paragraph-by-paragraph through Grammarly, selecting the “professional” setting.
  • Writing an essay in English, then using ChatGPT to review and edit, specifically directing it to adjust “phrasing” and “coherence.”
  • Writing an essay in English, then running each paragraph through Grammarly, using the “professional” option. 

Most students avoided mentioning the highly stigmatized ChatGPT, and some even declared that they hate it, but a couple did mention it as a useful tool for “brainstorming” (and one, as noted above, for editing).

Why use these tools? These strategies seem time-intensive, and the results highly variable: in the best case, a mediocre paper; in the worst, a practice punishable as academic dishonesty. Students presented similar backstories to make sense of their AI practices. Whether undergraduates or graduate students, in the United States as well as in China and Canada, they have been told by professors and teaching assistants to improve their writing, often receiving advice like the following:

  • “Your words are too simple.”
  • “Your writing is too personal.” 
  • “You need more transitional phrases.”
  • “You need more professional words.”

Some professors even allow the use of AI tools, provided the students acknowledge that use. So the students have developed strategies that directly address the writing advice they’ve received: they run an essay through Grammarly, selecting the “professional” output setting; they choose a Google-translated option that seems to have fewer “simple” words; they make explicit requests for transitional phrases (see “In essence,” “Let us be mindful,” and “We not only advocate…but also honor” above).

This seems legitimate, the students say, since these essays still contain their own original ideas. But after an essay has been through the AI wringer, those ideas can be hard for a reader to detect. Originality? Lost.

Despite my sincere efforts to remain a curious explorer and not judge these writing strategies through the lens of Aging Professor, I find them disturbing. A few analogies for the AI takeover of student writing began to simmer in my brain:

The first may be a bit obvious: Frankenstein’s Monster. We have created a monster (AI writing tools) that we can no longer control. When something written by a human individual goes through Grammarly and comes out radically different, that individual loses their voice. And if Grammarly has chosen vocabulary unfamiliar to that individual, the original writer no longer knows what they are saying. If that human happens to be a university student, they no longer know how their writing might sound to a professor or teaching assistant—or whether their original ideas remain original. The essay becomes like Frankenstein’s monster: out of the hands of its author, doing things over which that author no longer has any control. Ultimately, the monster turns on humanity and must be killed.

Another ominous analogy, less rooted in nineteenth-century fiction: A Self-Driving Car. I’ve asked several people if they would be willing to completely cede control to a self-driving car, spending their mornings in the car reading the paper, preparing for class, and talking with their kids while the Artificial Intelligence handles the driving. Everybody has balked at that idea—some intuitively uneasy about surrendering control of an activity as complex as an urban commute during rush hour, others citing YouTube videos that illustrate the kinds of disasters such negligence has already wreaked. Like AI writing assistance, a self-driving car simply doesn’t have a sense of the complex context of its activity—or of the very sensitive nature of human beings. My dad also pointed out that “Driving is fun!” Why let an AI tool do the fun part? I hope some readers see writing this way as well. Writing is fun! Like driving, it potentially gives us a sense of freedom—we can say anything! But both AI driving and cyborgian AI writing seem overly concerned with standardization, which inevitably eliminates the fun and the humanity involved in either activity.

The mention of “fun” also brings me to my third analogy: The Drum Track. Many songs get along just fine with a non-human drum track. But listen to a song recorded with a human drummer, or go to a live concert. Listen to that drummer: Do they play the same pattern again and again? Or do they surprise you with a jump on the established rhythm, or a withheld beat? How does this affect your experience of the song? While it may sometimes be fun and useful for musicians to use a drum machine to provide a driving beat, it’s nothing like the playing of a live drummer—even if that drummer makes mistakes now and then. Like creating music, writing involves establishing your own rhythm, your own voice, and that can’t be achieved with tools like Grammarly and ChatGPT, the writer’s equivalent of a monotonous drum track. Rather than turning to standardizing tools to shape an individual’s writing voice, one might instead read works by talented writers, engaging more fully with writing that does *not* plod along like a drum machine. As students and professors, we should build our own writing (and writing advice) on the models we most admire, not the most pedestrian standardized versions pumped out by AI.

By now, my opinion may be all too painfully clear. The monster must be killed, or ultimately it will kill us, or at least take a very large bite out of the humanity and joy of writing. Thus, my suggestion to students: Don’t use these tools! And to professors: Try to refrain from encouraging students to engage with them, even in a cyborgian compromise. Consider what you sacrifice in the long run; consider the purpose of education.

Readers may have alternative opinions and more practical suggestions. Please share your comments below! Or have ChatGPT express an opinion—I’m curious to see what it might “say.”

One thought on “Citizen Sociolinguistics and AI-Assisted Writing”

  1. I did a bit of an experiment with my students last year. They were Accelerated students, and after I got to know them a bit, I let them use ChatGPT with a specific purpose and instructions. They were to feed it the rubric, the prompt, and their essay, asking it to grade the essay and give them commentary back. They were not allowed to ask for rewrites or word replacements. The result was, overall, better essays. But I agree on the loss of individuality and voice with Grammarly and other such tools, though I’d note the above experiment accounted for that.

