ChatGPT AI Breakthrough Sounds the Alarm on Student Cheating

Universities are being urged to guard against the use of artificial intelligence to write essays after the emergence of a sophisticated chatbot capable of mimicking academic work, prompting a debate over better ways to evaluate students in the future.

ChatGPT, a program created by Microsoft-backed OpenAI that can form arguments and write persuasive essays, has raised concerns that students are using the software to cheat on written assignments.

Academics, higher education consultants, and cognitive scientists around the world have suggested that universities develop new modes of assessment in response to the threat to academic integrity posed by AI.

ChatGPT is a large language model trained on vast amounts of data, including large collections of text and books. It produces convincing and coherent answers to questions by predicting the most plausible next word in a sequence of words, but its answers are often inaccurate and require fact-checking.
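The next-word-prediction idea can be illustrated with a toy bigram model — a minimal sketch only, using an invented miniature corpus; ChatGPT itself relies on a far larger neural network trained on enormously more text:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models train on vast text collections.
corpus = "the model predicts the next word the model writes the essay".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short sequence by repeatedly appending the predicted next word.
sequence = ["the"]
for _ in range(4):
    nxt = predict_next(sequence[-1])
    if nxt is None:
        break
    sequence.append(nxt)
print(" ".join(sequence))
```

The sketch also shows why such systems can sound fluent yet be wrong: the model only extends text with statistically likely continuations, with no notion of whether the result is true.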

When you ask the program to produce a reading list on a particular topic, for example, it can generate fake references.

This week, around 130 university representatives attended a seminar organized by JISC, a UK-based charity that advises higher education on technology. They were told that a “war between plagiarism software and generative AI will not help anyone” and that the technology could be used to improve writing and creativity.

The wide accessibility of this tool, which is free to the public, has raised concerns about whether it makes testing redundant or requires additional resources to mark content.

Turnitin is software used by approximately 16,000 school systems worldwide to detect plagiarized work, and it can identify certain types of AI-assisted writing. The U.S.-based company is developing a tool to guide educators in assessing work with “traces” of AI-generated content, said Annie Chechitelli, product manager at Turnitin.

Chechitelli also warned of an “arms race” over detecting cheaters and said educators should encourage human skills such as critical thinking and editing.

An overreliance on online tools could impair skill development and creativity. A 2020 study by Rutgers University suggested that students who Googled their assignments scored lower on exams.

“Students will not automatically earn As for submitting AI-generated content; it’s more workhorse than Einstein,” said Kay Firth-Butterfield, head of artificial intelligence at the World Economic Forum in Davos, who added that the technology would improve rapidly.

Academics have warned that education has been slow to respond to these tools. “The education system as a whole is only just beginning to realize this, [but it is] the same kind of problem as cell phones at school. The response has been to ignore it, reject it, ban it, and then try to adapt it,” said Mike Sharples, professor emeritus at the Open University and author of Story Machines: How Computers Became Creative Writers.

Moving to more interactive assessments or reflective work could be costly and difficult for an already cash-strapped sector, said higher education consultant Charles Knight.

“Part of the reason the written essay is so successful is economics,” he added. “If you do [other] assessment, the cost and time required increase.”

Universities UK, which represents the sector, said it was watching closely but not actively working on the issue, while Australia’s independent higher education regulator TEQSA said institutions must be clear about their rules and communicate them to students.

“Learning is a process, it’s not the end result in many cases, and an essay isn’t helpful in many jobs,” said Rebecca Mace, digital philosopher and education researcher at the UCL Institute of Education.