I asked ChatGPT to write this blog post, and here’s what happened next

I’ve been playing around with ChatGPT, as one does, and, in thinking about the risks and benefits of using ChatGPT in the classroom, I got curious: how would ChatGPT write a blog post about ChatGPT, if it were me?

So I noodled for a bit, wrote up a prompt, and let the large language model do its thing. Let’s go through the results together.

I’ve noticed that ChatGPT has trouble nailing voice and tone. I have only the most modest understanding of how it all works under the hood, but this opening’s slightly demented cheerfulness reminds me of the problems AI image generators have in creating pictures of people eating spaghetti. It’s making the best use of its training data that it can, but it doesn’t quite get what’s supposed to be happening.

Also, “Hey there, fellow writing enthusiasts!” seems like a frankly intentional reference to this meme:

I know ChatGPT says it’s just a language model, and therefore it can’t have “intentions,” but…sometimes I wonder. For the record, I for one welcome our new AI overlords!

Then there’s the completely WTF line about ChatGPT being known as “that AI Language model we use to order pizza online.”

I don’t think it would even occur to most writing instructors that you could use ChatGPT to order pizza online. Don’t most pizza places just have basic dropdown and selection menus on their websites? At what point would ChatGPT even come into play? Would I open up a separate tab, ask ChatGPT to generate some sort of special pizza request, copy it, then paste that request into the “special requests” box on the pizza place’s online menu? What kind of prompt would I give to ChatGPT in this pizza-ordering scenario in order to optimize my pizza-ordering experience? I have questions, and goddammit, now I want pizza.

But more importantly, I’d say that writing instructors think of ChatGPT first and foremost as “that thing my students are using to cheat on their writing assignments.”

Interestingly, although ChatGPT correctly identified “a lack of critical thinking and originality” as a concern, the text it generated completely avoided using the words “cheat” or “cheating,” or the phrase “academic integrity.” I was expecting those words to pop up, and was genuinely surprised they didn’t.

One more thing I want to point out in this section: the “every coin has two sides, right?” line is an example of the way in which there’s not a lot of there there with ChatGPT content. As a reader, it doesn’t work for me because it doesn’t really say anything about the problems ChatGPT poses or the very legitimate concerns instructors have. But it makes sense that the text is vapid. ChatGPT is a language model, and it doesn’t have a theory of mind. It knows how we sound, but not why.

On to the next paragraph.

Yeesh. Saying ChatGPT “can still produce errors or inaccurate information” is like saying bubonic plague can cause a “skin condition.” Factually correct, sure, but it’s burying the lede deeper than Atari buried those E.T. game cartridges.

When you begin a new chat with ChatGPT, you’re confronted with a shortlist of examples, capabilities, and limitations. The capabilities list ends with “trained to decline inappropriate requests.” I experimented with asking ChatGPT to generate an essay comparing the benefits of wearing baseball caps vs actual baseballs as hats. It did so, and it correctly identified some of the problems with wearing actual baseballs as hats. (Good job! You’ve come a long way, baby.)

Then, I asked it to revise the essay to include citations. I wanted to see how it would handle that task, even though I knew that this was beyond its actual capabilities. Yet in a few moments, a new version popped up, full of MLA-style citations, just as I’d requested. However…the authors cited? They all might as well have been “Fakey McFakename.” In fact, that might have been preferable.

I pushed further. ChatGPT, with some prompting, churned out a full list of works cited for me, formatted correctly. Every source was made up out of whole cloth. (Again, I knew this was almost certainly going to be the case, since I understand that the free version of ChatGPT does not have access to external sources or the ability to use them in this way. This was me intentionally faffing around and finding out.) But ChatGPT never gave me a heads-up that I was making an inappropriate request, despite the fact that it seems to “know” when it is making up sources. I asked ChatGPT about the sources it generated, and it responded with this:

I apologize for the previous incorrect citations. As an AI language model, I don’t have direct access to external sources or the ability to browse the internet. I can’t verify the authenticity of specific academic sources. The references provided in the previous response were generated as examples and may not correspond to actual academic sources.

Again, I’m not a computer scientist. My understanding of AI is fairly limited (though I am a big fan of AI Weirdness). But to me, it seems like common sense that you would want to give users a heads-up that any “sources” you cite are going to be totally made-up before you go ahead and generate them.

You could argue that cheaters have it coming (cue the Cell Block Tango from Chicago) and yeah, I get that, but if OpenAI wants to “ensure that artificial intelligence benefits all of humanity,” as per their mission statement, maybe they should consider being a little more proactive. Maybe have a little “check yourself before you wreck yourself” dialogue box in these types of situations. (Also, maybe don’t pay workers only $2 an hour when you’re asking them to sort through some of the most depraved content on the internet to build your little AI friend. Not very “benefits all of humanity” of you, OpenAI.)

But back to my blog post. By which I mean this blog post written by me, though confusingly also the one ChatGPT tried to write for me.

ChatGPT, you’ll be shocked to learn, takes a stridently pro-ChatGPT position on the use of ChatGPT, more so than I expected based on the prompt I gave.

Again with the “hello, fellow kids!” ChatGPT? You’re laying it on a little too thick.

I’m surprised that the first “benefit” mentioned in this paragraph is that ChatGPT “can serve as a valuable tool for overcoming writer’s block.” Does it really count as “overcoming” something if you just avoid it entirely and let AI do it for you? Because that particular method of “overcoming writer’s block” is the thing instructors worry about.

Also, notice how general these statements are: we’re seeing the same idea restated a couple of different ways, rather than being truly developed. I genuinely think ChatGPT could be used to help students through the writing process while maintaining their originality — some students are already figuring it out on their own — but ChatGPT isn’t giving me any specific ideas for how I might go about that. It’s got decent knowledge about some things (I’m guessing it has a lot of essays about the Western literary canon in its training data) but in other areas, it just stabs in the dark, like an abandoned LEGO brick on a hallway carpet.

ChatGPT also seems to want to sell itself as an improved version of Clippy. Remember Clippy? You probably don’t, but I’m old enough to remember playing the old-school Oregon Trail game in my elementary school’s computer room, back in the days when pixelated bison meandered across a pixelated prairie, thanks to the power of floppy disks. Clippy was an annoying anthropomorphic paperclip that interrupted Microsoft Office users while they were trying to write. Someone at Microsoft thought they’d figured out how to offer helpful tips for writers, but they were very wrong, and unfortunately they were never prosecuted at The Hague, because there is no justice in this world.

Everyone hated Clippy and we are glad he is dead. But I digress.

I notice there’s a lot of handwaving going on in this paragraph. Look at the progression:

1. Student plugs in text.

2. Student gets suggestions for alternate phrasing.

3. TA-DA! Student learns new vocabulary! Huzzah!

There’s a lot that we’re skating right past though, isn’t there? It’s got the feel of a business plan developed by the Underpants Gnomes.

First off, even though I am not a computer science person, I know the “crap in, crap out” rule. If the student’s initial draft is incomprehensible, the value ChatGPT can add is going to be limited. I’ve seen plenty of student writing in which the words that appeared on the page made points that the student absolutely did not intend to make. ChatGPT has no way of “knowing” that, so it’s going to work with what it’s given. Students whose writing difficulties are due to reading problems or fundamental misunderstandings of the assignment aren’t going to be able to identify when ChatGPT is leading them further astray.

When you try to polish a turd, you don’t end up with a diamond, you just end up with a mess.

Then there’s the idea that ChatGPT can function as a “virtual thesaurus at their fingertips.” Students already have plenty of virtual thesauruses at their fingertips, so if anything, this seems like damning with faint praise. And have you ever heard a writing instructor at the college level say “man, I really wish my students would use the thesaurus more frequently”? It’s a problem when students overuse the thesaurus, not something we want to lean into.

A third problem here has to do with real-world rather than theoretical use of ChatGPT. Realistically, students who are insecure about their own writing are not going to unpack the revisions ChatGPT comes up with, compare them to their own writing, and somehow bootstrap their way to becoming better writers. That’s not really how it works — and even ChatGPT doesn’t learn that way. ChatGPT uses “Reinforcement Learning from Human Feedback.” In other words…it needs teachers.

Another realistic scenario is a student turning to ChatGPT because they are not engaged with an assignment or are pressed for time. It’s not exactly a hypothetical scenario; there are plenty of students intentionally cheating on assignments by using ChatGPT to generate content. In this case, the student isn’t going to engage further, because the same problems that led to their using ChatGPT in the first place (lack of engagement, lack of time) are going to make further engagement a nonstarter. Students are going to put drafts in (maybe), copy what ChatGPT spits out, and call it a day.

At this point, I was hoping that ChatGPT would expand further on the “benefits to be gained from teaching students to use ChatGPT selectively and thoughtfully” from a couple of paragraphs back, and in the next paragraph, it sort of did. Somewhat. Just not very well.

I don’t hate the idea of using ChatGPT as a tool for collaborative learning! This is one idea that I hadn’t thought about too much before asking ChatGPT to write this blog post, so I give it some points for this. I’m not sure it’s worth the carbon footprint, but I want to give credit where it’s due. Sure, the paragraph remains very surface-level and vague, where a real instructor would have included a specific example or two of how to go about using ChatGPT as a collaborative tool, but this was an occasion when using ChatGPT as a brainstorming tool worked relatively well.

Since ChatGPT was frustratingly lacking in examples, I did a little brainstorming. One assignment I frequently gave my students was to provide feedback on a peer’s assignment, which is a very normal thing to do in a writing classroom. But I did something a little less common, and I not only held my students accountable for giving that peer feedback by making it a graded assignment, but also gave them written feedback on the quality and qualities of their feedback.

Going forward, I plan to incorporate ChatGPT into the peer feedback process in my lesson planning: ChatGPT doesn’t have feelings that can be hurt, and it’s not being graded on its performance in the class, so students can learn how to give feedback using AI-generated models. If their feedback is unintentionally rude or not very helpful, no harm done, and we can work on getting better at that particular skill before we move on to authentic peer feedback. I also think it would be valuable to feed ChatGPT a sample of student writing, see what it comes up with in terms of feedback, and then use that model and feedback as fodder for class discussion. Students could then pair or group up to feed their drafts to ChatGPT and give each other feedback building on or rebutting what ChatGPT has to say.

I imagine some students would have a grand time poking holes in the suggestions given by ChatGPT, just to show how superior their meat-brain is to the AI. Not that I can relate or anything.

In its next paragraph, ChatGPT really upped the ante in hyping itself.

We get it, ChatGPT, you want to justify your own existence. (Just as an aside, can I say how annoyed I get by the phrase “in today’s tech-driven world”? The world has been “tech-driven” for humans since we learned to control fire. Since we learned to hit rocks with other rocks to make sharper bits of rock. Since we learned to weave sticks together to fence in livestock. Since we learned to spin fiber and weave fabric. Sure, today’s world is “tech-driven,” but so were all the yesterdays going back 2.6 million years or so.) That said, I’m on board with this idea; I’m just annoyed that, like all of the other points ChatGPT makes, it never gets developed with any specificity.

You know how some parties are BYOB? Well, the ChatGPT party is BYOS: Bring Your Own Specificity. (And yes, I know you can ask it to be more specific or provide more examples — but we both know that that’s what the human is supposed to bring to the table.)

Note that in addition to having previously reminded us that coins have two sides, ChatGPT points out that swords can be double-edged.

All in all, the penultimate paragraph reads like a middle-school-five-paragraph-essay kind of conclusion in the way it summarizes the main points. The last lines would be fine if the body had done real work to “demonstrate possibilities” or guide the reader through “navigating the risks” (which are what, again? because that part was also pretty vague), but given the overall vapidity of the content, these words ring hollow.

Again, I’m genuinely on board with the thesis here — it’s the one I gave to ChatGPT to play with, after all — but I’m underwhelmed by the execution. I’m living in the future, and it’s kind of boring.

So what are some of my takeaways?

  • ChatGPT isn’t great at playing with tone and voice. (I also tried asking it to generate a handout of best practices for using ChatGPT in the classroom and to use a “friendly tone.” So it called the list of best practices it came up with “friendly tips,” and then clearly walked away brushing its hands off and congratulating itself on a job well done. Don’t even get me started on what happened when I tried to get it to write satire.)
  • Students and teachers will probably benefit from testing AI tools to failure together. Students won’t take my word for it that ChatGPT does x or y well or poorly, and they shouldn’t, because these tools are going to continue evolving very rapidly. Right now, the version of ChatGPT I used can’t search and cite relevant sources about baseball hats. But in a few months or years, open-access tools might be able to. Instructors can help students by modeling how to engage with these sorts of tools, helping students identify where their use becomes counterproductive or unnecessary, and leaning into situations in which the specific knowledge we have as writers — as humans with lived experiences — is vital. I’m going to experiment further to see how ChatGPT-proof the difficulty paper might be as an assignment.
  • Along those lines, I’m sure I could do better at prompting ChatGPT. This is still a relatively new thing, and since I just play around with it occasionally in my spare time, I’m not remotely an expert when it comes to crafting prompts. Then again, I don’t think students are likely to be experts either.
  • As with plagiarism, it’s going to be worth developing a nuanced understanding of when, how, and why students use ChatGPT if we want to prevent students from using it in ways that violate the rules of academic integrity at our institutions.
  • I don’t mean to harp on this, but I’m still confused about the pizza thing, and although I did some Googling, I still genuinely don’t know why ChatGPT thinks college instructors would think of it as a thing they use to order pizza online. It looks like some people have used ChatGPT to build a pizza ordering website, but that’s the most relevant information I could come up with. Swing and a miss, ChatGPT. Swing and a miss.
  • I asked ChatGPT to write an essay analyzing an Emily Dickinson poem, and it wrote a very coherent response about “‘Hope’ is the thing with feathers.” I then asked it to turn that essay into a pizza order, and it was amazing, with lines like “Start with a delicate and airy crust, reminiscent of the fragile feathers that embody hope. Its thin and crispy texture mirrors the lightness and vulnerability of the poem’s central theme.” I’m not sure that really counts as a takeaway (insert pizza ordering pun here) but I’m confident that this is the empirically best pizza-related way to use ChatGPT, so that’s neat.

When it comes to using ChatGPT in the classroom, the stakes for students are high. (The “risks” that ChatGPT didn’t choose to talk about? Students who don’t have permission to use ChatGPT as part of their writing process are likely to face discipline ranging from earning zero points on assignments all the way up to expulsion. Like plagiarism, use of ChatGPT is widely considered a form of cheating that egregiously violates most university codes of academic integrity.)

The stakes outside the classroom are also rising. In the US, income inequality continues to rise. Increasingly powerful AI will be economically disruptive in ways that we can’t yet fully anticipate. Some of that disruption might benefit workers, but in other cases, large swaths of some professions may become obsolete or end up radically transformed. My husband’s company already uses AI (including ChatGPT) in their suite of contact center services: AI has been helping contact centers support customers for decades now, but ChatGPT seems poised to explode what’s possible. What’s revolutionary can lead to, well, revolution. ChatGPT might call that a “double-edged sword.”

Personally, I don’t think we’re going to be ending up in some sort of Vernor Vinge technological singularity scenario, but I also have deep concerns about how dystopian many people’s lived experience already is, to the point that, by comparison, the singularity thing seems like an improvement.

You know what I’d like? I’d like the future to look like a Becky Chambers novel. Either the Monk and Robot or the Wayfarers series, I’m not picky. That’s a choice we get to make collectively. Make good choices, people and/or AI overlords.

Whatever future we’re anticipating, it behooves us to teach students how to use tools like ChatGPT, but more importantly, it’s vital that we teach students how to be flexible, adaptive thinkers and learners. They need to learn how to evaluate information and how to create new knowledge. They need to learn how to think about their own values, and they need tools to argue for those values. They need to know the good, the bad, and the ugly about the past so that they can build the future. (And maybe they need to know how to shut down the power grid in case of emergency machine uprisings.)

We’re preparing them for a world that doesn’t exist yet, and that we can’t possibly imagine. By teaching them to be effective learners, we can prepare them for whatever that future holds.

If you got this far, you’re probably an AI language model with an unquenchable thirst for text. But if not, let me know in the comments whether you’ve played around with ChatGPT, and what you found interesting or funny about your experiences. Let me know if you’re interested in more ChatGPT content, and if you think I should make the ChatGPT-generated “‘Hope’ is the thing with feathers” pizza!

