16 January 2024

Reporting back on new assignment

When I set a new assignment at the beginning of term, I said I would report back on how it went. Originally, the assignment had been for the students to write a Conversation-style article about a topic in marine geology. But the thing is, nowadays you can just ask AI to do that. So we had to change the assignment. And we did exactly that: we asked AI to write it. What we then asked the students to do was critique these texts. What was wrong in them? Where were they not giving enough data? Were the citations the AI was using suitable?

We now know how it went. And the results were a mixed bag! That is a good thing; if all students do either a sterling job or a really bad one, the assignment is clearly not fit for purpose. It needs to produce a range of results. And it did.

One thing that seemed to be a good predictor of how good a job a student had done was whether they had checked the references. Several students hadn't done that, and they didn't end up scoring very well. AI engages in hallucinations and just makes stuff up, and that includes scientific references. That is a really bad thing, of course! Everything stands or falls with how well it is backed up by evidence, and if your evidence is made up, what you have produced is by definition rubbish. And most of the references the AI had made for me (or for Jaco) were indeed made up. Just looking at the title of a reference is not good enough to see if it is actually suitable. Luckily, most students had noticed that most references were fake, and that the ones that did exist were often not suitable in the context they were used in.

The students also varied in how good they were at picking up the inaccuracies. As expected. And these could range from blatant nonsense to a bit of an overstatement or a subtle omission.

We also asked them for additions. Unfortunately, quite a number of students only said things like "there needs to be more data here" or "it would be good to have some concrete examples here" without providing the actual data or examples. That is a missed opportunity. But some students really dived in and bulked everything out with hard evidence and an impressive reference list. That was great to see!

One thing I noticed was that students were a bit reluctant to dismiss text. In the text about seafloor mapping, there was mention of electrical methods. But if you are floating around in a saline solution, electricity isn't of much use to you as a mapping tool. This was clearly AI nonsense. Electrical methods certainly are used, but generally either on land or inside boreholes, and borehole analysis is something other than mapping. Yet few students were willing to boot that paragraph out. One student had even found an article where the method was used at sea, but that involved physical probes being placed inside the sediment, and the article mentioned this was only done in a few metres of water depth. That is not really what you think of when you think of seafloor mapping. Maybe they were treating the text with a little bit too much reverence?

Some of the text made me smile. One student went full pedantic, and was picking on whether particular CO2 storage projects were within a country’s territorial waters or not. Another one was commenting on the claim that ocean acidification would damage undersea cables. He said that by the time the ocean was acidic enough for that to be significant, we would have other things to worry about than the cables. He had a point!

So altogether, I think it was an initial success. And we now know a bit better what to expect, so we can adjust the instructions a bit in light of these first results. But who knows! Maybe next year AI will actually be able to write flawless texts…


It looks so credible! But is it?

