Artificial intelligence

ChatGPT worries teachers trying to detect AI-assisted cheating

Teachers and professors across the education system are in near panic as they face an artificial intelligence revolution that could enable cheating on a massive scale.

The source of their concern is ChatGPT, an artificial intelligence bot released a few weeks ago that allows users to ask questions and, moments later, receive well-written answers that are eerily human.

Almost immediately, educators began experimenting with the tool. Even though the bot’s answers to academic questions weren’t perfect, they were uncomfortably close to what teachers expect from many of their students. How long, educators wonder, will it be before students start using the site to write essays or computer code for them?

Māra Corey, an English teacher at Irondale Senior High School in New Brighton, Minnesota, said she discussed the issue with her students almost immediately so they could understand how using the tool could interfere with their learning.

“Some of them were shocked that I knew about it,” she said. She wasn’t afraid that the conversation might plant bad ideas in their heads. “Hoping teens don’t notice the flashy new thing that will save them time is a fool’s errand.”

Within days of its launch, over a million people had tried ChatGPT. Some asked innocent questions, like how to explain to a 6-year-old that Santa Claus doesn’t exist. Other queries required complex answers, such as finishing a tricky piece of software code.

For some students, the temptation is obvious and enormous. A senior at a Midwestern school, who spoke on the condition of anonymity for fear of expulsion, said he had already used the text generator twice to cheat on his schoolwork. He got the idea after seeing people on Twitter show how powerful the generator was following its release on November 30.

He was taking an at-home quiz for a computer class that asked him to define certain terms. He pasted them into ChatGPT and, almost immediately, the definitions came back. He copied them by hand onto his quiz paper and submitted the assignment.

Later that day, he used the generator to help him write code for an assignment in the same class. He was stumped, but ChatGPT was not: it produced code that worked perfectly, he said. After that, the student said he was hooked and planned to use ChatGPT to cheat on exams instead of Chegg, a homework-help website he had used in the past.

He said he’s not worried about getting caught because he doesn’t think the professor can tell his answers are computer-generated. He added that he had no regrets.

“It’s kind of up to the professor to make better questions,” he said. “Use it to your advantage. … Don’t take a whole course on it.”

The tool was created by OpenAI, an artificial intelligence lab launched several years ago with funding from Elon Musk and others. The bot is powered by a “large language model,” AI software that is trained to predict the next word in a sentence by analyzing huge amounts of internet text and finding patterns through trial and error. ChatGPT has also been fine-tuned by humans to make its responses more conversational, and many have noted its ability to produce paragraphs that are often humorous and even philosophical.
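The next-word prediction that underlies such systems can be demonstrated with a much smaller, openly available model. Below is a minimal Python sketch, assuming the Hugging Face transformers library and the small open GPT-2 model, a far weaker cousin of the system behind ChatGPT, purely as an illustration of the idea:

# Illustration of next-word prediction with a small open model.
# Assumes the Hugging Face "transformers" library and GPT-2 weights are
# installed; this is not ChatGPT, only a toy demonstration of the same
# underlying technique.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The causes of the American Civil War include"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])  # the prompt plus the model's continuation

Each run can produce a different continuation, because the model samples its next word from the probabilities it has learned from internet text.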

Yet some of its answers have been blatantly wrong or bigoted, such as when a user prompted it to write rap lyrics that read, “If you see a woman in a lab coat, she’s probably just there cleaning the floor.” The bot’s creators acknowledge that ChatGPT is not perfect and can give misleading answers.

Educators expect the tool to improve over time and students to become more familiar with it. Some say teachers will adjust their assessments to account for the possibility of cheating, for example by requiring students to write papers by hand or during class, where they can be supervised. Others are considering questions that demand deeper thinking, which is harder for the bot.

The stakes are high. Many teachers agree that learning to write can only take place when students grapple with ideas and put them into sentences. Students start out not knowing what they want to say, and as they write they understand it. “The writing process transforms our knowledge,” said Joshua Wilson, an associate professor at the University of Delaware’s School of Education. “It will be completely wasted if you just jump to the final product.”

Wilson added that while universities were buzzing about it, many high school teachers remained blissfully unaware.

“The average K-12 teacher – they’re just trying to get their [semester-end] grades,” he said. “It’s definitely a wave that’s going to hit.”

Department chairs at Sacred Heart University in Connecticut have already discussed how to handle artificial intelligence, and faculty members know they will need to find ways to address it, said David K. Thomson, an associate professor of history at the school.

Thomson said he found by experimenting with the site that it answered fairly well the kinds of questions that appear on many take-home exams, such as one asking students to compare, in economic and other terms, the development of colonies in North and South America before the revolution. “It wasn’t perfect,” he said. “Students aren’t perfect either.”

But when he asked it a more sophisticated question, such as how Frederick Douglass made his case against the institution of slavery, the answer was much less convincing. Professors, he said, will have to design assessments that test analytical reasoning, not just facts that can be looked up.

At the same time, others see possible benefits. The technology is an opportunity for teachers to think more deeply about the assignments they give, and to talk with students about the importance of creating their own work, said Joshua Eyler, an assistant professor at the University of Mississippi who leads the Center of Excellence in Teaching & Learning and who has derided the reaction as a “moral panic.”

“This is sort of the calculator moment for the teaching of writing,” Eyler said. “Just as calculators changed the way we teach math, this is a similar moment for the teaching of writing.”

“As you might expect, what we’ve seen is a sort of moral panic,” he added. “There is a great fear that students will use these tools to cheat.”

Michael Feldstein, an educational consultant and the editor of the e-Literate blog, said that alongside the panic there is also curiosity among educators. Some professors in business-focused fields see AI-generated writing as a potentially useful tool, he said. A marketing student could use it to write marketing copy in school, and later on the job. If it works, he asked, what’s wrong with that?

“They don’t care if the students will be the next Hemingway. If the goal is communication, it’s just another tool,” Feldstein said. The most important thing, he said, is that the tool is used as part of learning, not in place of learning.

As educators think about how to live with technology, some companies are thinking about ways to beat it.

Turnitin, a company that makes software widely used to detect plagiarism, is now investigating how it could detect AI-generated material.

Automated essays differ from student-written work in many ways, according to company officials. Students write in their own voice, which is missing from ChatGPT’s output. Essays written by AI sound like the average of everything, while any given student is not average, so AI-written essays won’t sound like that student’s own work, said Eric Wang, vice president of AI at Turnitin.

“They tend to be probabilistically vanilla,” he said.

But detecting cheaters using technology will be difficult.

Sasha Luccioni, a researcher at the open-source AI startup Hugging Face, said OpenAI should allow the public to examine ChatGPT’s code, because only then can scientists build truly robust tools to catch cheaters.

“You’re working with a black box,” she said. “Unless you really have [access to] these layers and how they are connected, it’s really hard to make a meaningful [cheating detection] tool.”

Hugging Face hosts a detection tool built for an older language model, GPT-2, and says it could potentially help teachers spot ChatGPT-written text, though it would likely be less accurate for newer models.
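Detectors of this kind are themselves classifiers trained to score a passage as more likely human- or machine-written. A rough sketch of how one might be called from Python, assuming the transformers library and the RoBERTa-based GPT-2 output detector hosted on the Hugging Face hub (the model name and its output labels here are assumptions, not a vetted recipe):

# Hypothetical use of a GPT-2-era output detector; "roberta-base-openai-detector"
# is the assumed model identifier on the Hugging Face hub, and the labels it
# returns may differ from what a newer ChatGPT-specific detector would report.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay = "The causes of the American Civil War were complex and varied..."
print(detector(essay))  # a label and a confidence score for the passage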

Scott Aaronson, a visiting scholar at OpenAI, said the company is exploring different ways to combat abuse, including watermarks and other patterns that could distinguish bot-generated text from human writing. Some have questioned whether the watermark approach will be sufficient.

“We are still conducting experiments to determine the best approach or combination of approaches,” Aaronson said in an email.
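The article does not describe OpenAI’s scheme, but the general idea behind published watermarking proposals is that the generator quietly favors words from a secret, pseudorandomly chosen “green list,” leaving a statistical trace that anyone holding the same secret can measure. A toy sketch of the detection side, with an entirely hypothetical secret key and word-level hashing standing in for a real token-level scheme:

# Toy illustration of green-list watermark detection; the secret key and the
# word-level hashing are hypothetical simplifications, not OpenAI's method.
import hashlib

SECRET = "classroom-demo-key"  # hypothetical shared secret

def is_green(word: str) -> bool:
    # Deterministically assign roughly half of all words to a "green list".
    digest = hashlib.sha256((SECRET + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Fraction of words in the text that fall on the green list.
    words = [w.strip(".,!?;:\"'") for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

# Ordinary human prose should hover near 0.5; text from a generator nudged
# toward green words would score noticeably higher.
print(green_fraction("Ultimately, it is important to communicate clearly with students."))

Real proposals work on model tokens rather than whole words and weigh the count against the length of the text, but the statistical principle is the same.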

ChatGPT had its own ideas about the solution. When asked how to deal with the possibility of cheating, the bot offered several suggestions: educate students about the consequences of cheating, monitor exams, make questions more sophisticated, and give students the support they need so they don’t feel the need to cheat.

“Ultimately, it’s important to communicate clearly with students about your expectations for academic integrity and to take steps to prevent cheating,” the bot explained. “It can help create a culture of honesty and integrity in your classroom.”
