Universities are being urged to safeguard against the use of artificial intelligence to write essays after the emergence of a sophisticated chatbot that can imitate academic work, leading to a debate over better ways to evaluate students in the future.
ChatGPT, a program created by Microsoft-backed company OpenAI that can form arguments and write convincing swaths of text, has led to widespread concern that students will use the software to cheat on written assignments.
Academics, higher education consultants and cognitive scientists across the world have suggested universities develop new modes of assessment in response to the threat to academic integrity posed by AI.
ChatGPT is a large language model trained on millions of data points, including large chunks of text and books. It produces convincing and coherent replies to questions by predicting the next plausible word in a sequence of words, but often its answers are inaccurate and require fact-checking.
When you ask the program to produce a reading list on a particular topic, for example, it can generate fake references.
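The next-word prediction described above can be illustrated with a toy bigram model. This sketch is purely illustrative: the corpus and function names are invented, and real systems such as ChatGPT use neural networks trained on vastly more text, but the core idea of choosing a plausible next word from observed frequencies is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only).
corpus = (
    "the model predicts the next word . "
    "the model predicts the most plausible word ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("model"))  # -> "predicts" in this toy corpus
```

Because the model only ever picks a statistically likely continuation, it has no notion of truth, which is why fluent but fabricated output, such as invented references, is a known failure mode.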
This week, about 130 university representatives attended a seminar by JISC, a UK-based charity that advises higher education on technology. They were told a “war between plagiarism software and generative AI won’t help anyone” and the technology could be used to enhance writing and creativity.
The tool's wide accessibility, free to the public, has raised concerns that it could render the essay redundant as a form of assessment, or force universities to spend extra resources scrutinising submitted work.
Turnitin is software used by around 16,000 school systems globally to detect plagiarised work, and it can identify some kinds of AI-assisted writing. The US-based company is developing a tool to guide educators in assessing work that bears “traces” of AI-generated text, said Annie Chechitelli, chief product officer at Turnitin.
Chechitelli also warned against an “arms race” on detecting cheaters and said educators should encourage human skills such as critical thinking and editing.
Over-reliance on online tools could also hamper students’ learning and creativity. A 2020 study by Rutgers University suggested that students who Google answers to their homework get lower grades in exams.
“Students are not going to be getting automatic As by submitting AI-generated content; it is more of a workhorse than Einstein,” said Kay Firth-Butterfield, head of artificial intelligence at the World Economic Forum in Davos, who added that the technology would rapidly improve.
Academics have warned that education has been slow to respond to these tools. “The education system as a whole is just waking up to this, [but it is] the same sort of issue as mobile phones in school. The response was ignoring it, rejecting it, banning it and then trying to accommodate it,” said Mike Sharples, emeritus professor at the Open University and author of Story Machines: How Computers Have Become Creative Writers.
Moving to more interactive assessments or reflective work could be costly and challenging for an already cash-strapped sector, said Charles Knight, a higher education consultant.
“The reason the written essay is so successful is partly economic,” he added. “If you do [other] assessment, the cost and the time needed increases.”
Universities UK, which represents the sector, said it was watching closely but not actively working on the issue, while the Australian independent regulator of higher education TEQSA said institutions needed to define their rules clearly and communicate them to students.
“Learning is a process, it isn’t about the end result in a lot of cases and an essay isn’t useful in lots of jobs,” said Rebecca Mace, digital philosopher and educational researcher at UCL’s Institute of Education.