I Would Have Cheated in College Using ChatGPT

As I outlined recently in my “e-Literate’s Changing Themes for Changing Times” post, I am shifting my coverage somewhat. I’ll be developing and calling out tags I use for these themes so that you can go to an archive page on each one. This one will be listed under the “AI/ML,” “third-wave EdTech,” and “future of work” tags.

ChatGPT is creating all kinds of buzz about students cheating on essays. So it got me thinking. If I had had access to a tool like ChatGPT when I was in college, would I have used it to cheat?

“A robot writing an essay” as interpreted by DALL-E 2

Yes. Absolutely. One hundred percent. But I wouldn’t have thought of it as cheating.

Cheating is a state of mind

When I was in college, I had a massive chip on my shoulder. If I caught the slightest whiff that the professor didn’t care whether I was learning, or that the assignment wasn’t well designed to help me learn something, I would immediately flip into “grudge” mode.

I never cheated. The whole point, in my immature mind, was to prove that I was smarter than the professor. So I would stay up late, party the night before the assignment was due, and give myself three or four hours to write it the next morning, hung over, before I had to turn it in. To give you a sense of just how far these grudges went, I didn’t actually enjoy partying very much. Sometimes I would go out of my way to do so because of the stupid assignment. It was all part of a game to challenge myself. No pain, no game.

I wasn’t always so self-destructive about it. If I felt the professor was genuinely interested in my learning but had simply written an assignment I wouldn’t learn from, I’d take the writing prompt and try to be as creative—and subversive—as possible. But that only worked when the professor would understand and appreciate the joke. If I had a professor who had (in my judgment) given me a coloring book and was going to grade me on whether I colored inside the lines, that triggered my worst adolescent self. While I didn’t cheat in the conventional sense, my goal was to minimize the effort required to get an adequate grade and show myself how smart I was in the process. My social contract with the teacher was no less broken than that of the student who copied somebody else’s work. I once defined cheating as “engaging in behaviors that are intended to facilitate passing without learning.” By that standard, I cheated my ass off.

Something like ChatGPT would have been part of the game to me. If you’ve played around with it at all (or read some articles about it), you’ll know that writing prompts that lead the AI to generate good text is something of an art. I would have spent a lot of time crafting the best prompt possible. Then I would have edited the output. I still would have wanted to produce a good essay. If this process took more time to produce the end result than just writing it from scratch myself, it wouldn’t have mattered to me. I would have subverted the assignment into something that actually challenged me while flipping the bird to the instructor who had the temerity to underestimate or underappreciate their students in general and me in particular. That was the whole point of the game. I wanted to learn. And I wanted to care. Nothing pissed me off more than a professor who wasted my educational time.

ChatGPT wouldn’t have violated my “no cheating” rule because I wouldn’t have been cheating according to my rules.

Faculty tend to think that cheating is defined by their rules and the college honor code. The reality is far more complex. For me, it was heavily influenced by my social contract with each teacher, whether I felt they were holding up their end of the bargain, and how I could turn every assignment into a game that was a fun mental challenge. Other students may be influenced by which assignments they think are important for their career goals, how much work is reasonable to expect of them and, very often, whether they think the instructor cares about their learning.

I can’t emphasize that last point enough. I’ve conducted a fair few focus groups with students over the years. The results consistently supported the research evidence that students are heavily influenced by whether they believe their teacher cares about their learning. And this manifests in surprising ways. I remember one particular focus group vividly. We were talking about what factors cause students to engage with a class more than they expected or planned to. They all agreed that having a teacher who cared about their learning was a major factor. I asked them how they could tell if a teacher cares. I got some of the usual, expected answers like “she knows my name” or “he talks to me after class to see how I’m doing.” One surprising answer that elicited a lot of nods from the group was, “She picks a random student during class and asks them a question about the reading. She doesn’t grade us on the answer, but it’s embarrassing if you don’t know it.”

“You liked that?” I asked, somewhat incredulously.

“No,” he replied. “I hated it. But I know she did it because she wanted us to learn. So I did the readings.”

ChatGPT as a tutor

The current fad with journalism about ChatGPT is to ask the bot some version of, “If you were writing this article, what would you write?” The value of that exercise depends on the incisiveness of the commentary regarding the output, the prompt, and the relationship to writing craft. Examples of this genre can range from pretty interesting to pretty terrible, depending on the quality of thinking of the human “writer” involved. Bad articles end either in “WHEEEEE!” or “NO HUMAN WILL EVER LEARN HOW TO WRITE A COMPLETE SENTENCE AGAIN!” Better articles give us insight into what the tool is good for as a writing aid and how that reflects on writing craft.

Let’s see how well I fare at this genre. Can ChatGPT write this blog post for me? The base question I’m after is how to use a tool like ChatGPT to engage students in the craft of writing the way it would have engaged me. As you’ll learn by playing around with the tool or even reading some of the better articles on it, part of the craft in getting useful answers out of ChatGPT is writing a good prompt. My prompts in the experiments below are in bold. ChatGPT’s answers follow.

I’m a college philosophy professor. I have created an assignment asking students to write an essay on a particular topic. They are allowed to use ChatGPT to generate their first draft but will be graded on the accuracy, originality of thought, and writing quality of their final draft. They will also be evaluated on the quality of the ChatGPT prompt that they write. What are the educational strengths and weaknesses of this assignment?

There are both strengths and weaknesses to using a tool like ChatGPT for a college philosophy assignment. One potential strength is that it can help students generate ideas and get started on their essays. This can be particularly helpful for students who may be struggling to come up with a topic or approach to the assignment.

Another potential strength is that using ChatGPT can help students practice using technology in their writing. This is an important skill in today’s world, and it can help students develop the ability to effectively use tools like ChatGPT in their future academic and professional endeavors.

On the other hand, there are some potential weaknesses to this assignment. One is that ChatGPT is a tool that generates text based on prompts, but it does not have the ability to understand the meaning or significance of the words it generates. This means that the essays produced using ChatGPT may not be as original or thought-provoking as those produced by the students themselves.

Another potential weakness is that using ChatGPT could discourage students from engaging with the material in a deep and meaningful way. Instead of reading and thinking critically about the assigned texts, they may simply use ChatGPT to generate a first draft and then submit it without further thought or analysis.

Overall, the use of ChatGPT in a college philosophy assignment has both strengths and weaknesses. It can be a helpful tool for generating ideas and getting started on an essay, but it should not be used as a substitute for careful reading, thinking, and writing.

My dialogue with ChatGPT

Meh. ChatGPT’s answer has all the hallmarks of an adequate but unimaginative high school essay, right down to the form. It has not produced an essay that is “original or thought-provoking” or written content that engages with the prompt “in a deep and meaningful way.”

We can marvel at the ability of the technology to do this well. It is genuinely miraculous and merits a pause for a moment of wonder. That said, the content itself is not enlightening or even particularly interesting by human writing standards.

Let’s try again. I’ve altered the prompt, incorporating language from ChatGPT’s output in our first try. To make the change obvious, the text I added is the final question of the prompt below:

I’m a college philosophy professor. I have created an assignment asking students to write an essay on a particular topic. They are allowed to use ChatGPT to generate their first draft but will be graded on the accuracy, originality of thought, and writing quality of their final draft. They will also be evaluated on the quality of the ChatGPT prompt that they write. What are the educational strengths and weaknesses of this assignment? How can I improve this assignment to encourage students to produce more original, thought-provoking assignments in which the students engage with the material in a deep and meaningful way?

One way to improve this assignment would be to provide more guidance on how to use ChatGPT effectively. For example, you could provide students with examples of good and bad ChatGPT prompts, and explain how to write a prompt that will generate thoughtful, original responses from the tool.

Another way to improve the assignment would be to require students to engage more deeply with the material before using ChatGPT. For example, you could ask them to read and annotate the assigned texts, or to participate in a class discussion, before using ChatGPT to generate their first draft. This will help ensure that they have a strong understanding of the material and are able to generate original, thought-provoking ideas on their own.

In addition, you could also consider modifying the grading criteria for the assignment. For example, you could place less emphasis on the accuracy of the ChatGPT-generated text, and instead focus on the student’s own writing and analysis in the final draft. This would encourage students to take the time to engage with the material and produce original, well-written essays.

Overall, there are several ways you could improve this assignment to encourage students to produce more original, thought-provoking work. By providing more guidance on how to use ChatGPT effectively, requiring deeper engagement with the material, and modifying the grading criteria, you can help students develop the skills and habits needed to produce high-quality work in their philosophy studies.

My dialogue with ChatGPT

This answer is more interesting. The first suggestion—teach students how to write effective prompts—is a good start. I do think this is a skill that will be taught widely in schools within five years. (It should be sooner than that and it realistically might be later.) I don’t agree with the suggestion to de-emphasize the accuracy of ChatGPT’s text. Evaluating the quality of source material is an essential skill which becomes more important, complex, and interesting with the introduction of tools like this one. That said, a lot depends on context. I could imagine assignments for which I would take ChatGPT’s advice on this point. Notice, though, that the structure of the “essay” ChatGPT generated is still pretty stiff and formulaic. Leaning too heavily on a tool like this could have the effect of cultivating competent writing at the expense of stifling great writing.

Still, I can see the shape of a pedagogical process—and preferably a supporting end-to-end tool—that teaches many of the skills involved with good writing, including some hard ones like checking sources and editing, while leaving room for some elements of creativity. If it is scaffolded properly—again, with the right tool and process but also with a good, solid rubric—it could enable educators to spend more of their time homing in on specific aspects of the writing process with less drudgery. Particularly if used judiciously as part of the writing curriculum rather than the whole thing, it could be quite useful.

It’s also honest. It strengthens rather than weakens the social contract between student and educator by allowing the students to use a tool as long as they are open about it and are using it as part of a genuine learning process rather than a shortcut around thought work.

Would I use ChatGPT to help me write blog posts?

In principle, I have no problem with the idea of using machine-generated text in e-Literate posts as long as it is properly attributed. In practice, I haven’t been able to figure out a way to make it useful.

Part of the value of e-Literate is that it can be surprising in both content and form. Novelty teaches while entertaining. Could a tool like ChatGPT develop the right sort of novelty to fit with this blog? On the surface, it probably could. We’re already starting to see examples of the tool being asked to write an article on X subject in the style of Y person. If trained on the thousands of posts I’ve written, I suspect that a tool like ChatGPT, or maybe the next generation of it, could learn to use more em dashes, write convoluted sentences, and be more snarky. It might even incorporate some themes that show up in my posts.

But it doesn’t actually understand anything that it writes. ChatGPT distills and synthesizes answers that have already been given. It can only write about ideas that I have already thought of and written about. It can’t write about the idea I’m going to have tomorrow. e-Literate isn’t about what I know. It’s about what I’m learning. As such, I don’t yet see how I could get much value out of a tool like ChatGPT in the foreseeable future, at least for this blog. The only exception I can think of is for posts like this one that are about the tool. ChatGPT didn’t really write part of this post for me. It generated artifacts for me to analyze in my own writing.

These lines of demarcation—the lines between when a tool can do all of a job, some of it, or none of it—are both constantly moving and critical to watch, because they define knowledge work and point to the future of work. We need to be teaching people how to do the kinds of knowledge work that computers can’t do well and are not likely to be able to do well in the near future. Much has been written about the economic implications of the AI revolution, some of which are problematic for the employment market. But we can put too much emphasis on that part. Learning about artificial intelligence can be a means for exploring, appreciating, and refining natural intelligence. These tools are fun. I learn from using them. Those two statements are connected.

Would I teach writing using ChatGPT?

If I were teaching writing today, would I use an AI tool? In practice, probably not, simply because it would be too much work to cobble together the pieces.1 This is the perennial challenge of EdTech, which, on balance, creates a vastly underestimated drag on the amount of time educators have to put into the thought work of delivering high-quality education. In principle, though, yes, I absolutely would. I have a clear picture in my head of what I would need in terms of the EdTech and what sorts of writing work I would (and wouldn’t) use it for.

Do I think higher ed is ready for widespread adoption of these tools? That’s a harder question. Teaching this way requires a new skillset. Higher ed has abysmally under-resourced professional development support for teaching, on the whole. Also, teaching this way isn’t what our PhD system trains young academics to aspire to. Many will see it as a dumbing down of the work they have dedicated their lives to. And if implemented poorly, it can easily turn into that.

So will AI text generation tools revolutionize or kill college writing? Both! Neither! For sure! Probably! Eventually! Somewhat! It’s…complicated.

As usual.
