Just Chat! / The fair use of LLM and AI in education.
« on: 14/04/2024 02:45:59 »
Hi.
Did you know that a lot of universities run submitted material, like essays, through a computer to check for plagiarism? You probably did know that.
Did you know they now also include a quick assessment of the probability that the content was substantially created by some LLM (Large Language Model - like Gemini, ChatGPT etc.)? You may have known that, but LLMs are quite new, so they've only recently started doing it.
If you submit an essay with a high chance of having been generated by an LLM, then it can be rejected or the person can be called in for a viva examination.
However, it seems to be a bit one-sided. There's one student I know who felt strongly that the feedback and comments on their essays were very generic and not especially useful. They ran the feedback through an LLM checker and - well, you guessed it. Perhaps they should reject the feedback and insist the marker comes in for a face-to-face meeting where the essay is properly discussed for 10 minutes?
On the other hand, perhaps it's not all bad. Education may end up becoming a game or a test to see who can game the system better - but we will have graduates who can play the game very, very well. Students will be working not just to avoid triggering an LLM detection but also to get future iterations of their LLM-assisted essays adjusted in line with output generated by the marker's LLM system. This will help the marker's AI system pick up key elements and hopefully grade the essay more highly. We'll have future graduates who are highly skilled with this new LLM technology.
That will be important because LLMs and AI will only continue to develop, so our new graduates will need to be good with this technology. There's got to be a bright side to everything.
Best Wishes.