14 May 2023

ChatGPT at work

It was only in December that I became aware of ChatGPT. I figured it would barge its way into my life fairly soon! And it sure did.

If a student submits a piece of work as their own when they haven't actually written it themselves, that is academic malpractice. And I am the person who gets alerted to it if someone suspects that has been going on. And if you, for instance, let a chatbot write your first year essay, that does count as academic malpractice. And we are only a few months down the line, but this has started to happen.

I suppose there are two questions here. The first thing is: is a chatbot not just a tool? In what way is it fundamentally different from using the spellchecker of Microsoft Word? What are we doing if we are not teaching our students to work with the tools available to them?

The second question is: how will we even be able to tell? As things stand, we are not able to conclusively prove use of artificial intelligence in student work. We can only suspect.

The thing about the work that has been flagged up as potentially written by AI is that it isn't very good. I suppose with every day that passes, the tools will get better, but as things stand, ChatGPT doesn't do a particularly good job at this specific kind of work. One of my colleagues suspected that an essay draft had been written by AI. He then simply asked ChatGPT to write an essay on the very same topic. The result was eerily similar to the one the student had submitted. And both versions contained nonsense that someone in the field will immediately spot.

Another colleague had a similar case. And a third colleague thought she'd put it to the test. She took some genuine student essays, had a few more generated by ChatGPT, and then asked us all to have a look and see if we could tell the difference. I think that out of 15 guesses by colleagues, only one was wrong! So it looks like old-fashioned intelligence can still spot artificial intelligence.

For now, I think this gives us the answer to that first question. Yes, we need to teach students that they can use it as a tool in certain circumstances, but we also need to show them what its limitations are. We might not be able to prove use of AI, but we can tell when a language model has been talking through its hat. So yes, submitting work written by AI will yield a better result than submitting no work at all, but if you want to do a decent job, it really pays off to do the thinking yourself!
