The release of OpenAI's ChatGPT chatbot gave us a glimpse into the future of teaching and learning alongside artificial intelligence.
Educators immediately pointed out the chatbot's ability to generate meaningful answers to assessment and exam questions. These answers often cannot be attributed to any particular source, which makes plagiarism difficult to detect.
The concerns have not gone unnoticed. Shortly after ChatGPT was released, OpenAI announced that it was developing a “digital watermark” to embed in the chatbot's responses. This type of watermark is embedded as a digital signal that can identify content as AI-generated and (in theory) should be difficult to remove.
But as experts have pointed out, people will probably find workarounds. And it won't be long before other AI models as capable as ChatGPT appear. What will this mean for our educational spaces?
It is safe to say that we can expect difficult years ahead.
But first, what is ChatGPT?
ChatGPT is powered by GPT-3, a large language model (LLM) trained on a huge amount of textual data – around 500 billion words. Large language models work by predicting the most likely next word given a list of input words (usually phrased as a question).
The model has several billion parameters that can represent the complexity of the world. As real-world data is fed in, the training algorithm readjusts these parameters until the model can produce human-like answers to a series of questions. For instance:
How many eggs can a grain of rice contain?
It is not possible for eggs to fit inside a grain of rice, because a grain of rice is much smaller than an egg. A grain of rice is a small, round seed that is usually less than a millimeter in diameter, while an egg is a much larger object that is usually several centimeters in length and width. […]
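The “predict the most likely next word” idea described above can be illustrated with a toy sketch. This is not OpenAI's code – real LLMs use neural networks with billions of parameters – but a minimal bigram model that learns next-word frequencies from a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical; a real model sees ~500 billion words).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" – it follows "the" more often than any other word
```

An LLM does the same kind of next-word prediction, but instead of raw counts it uses learned parameters that generalise far beyond its training text, which is why it can answer questions it has never seen verbatim.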
Even though it is not infallible, ChatGPT's capabilities both shock and inspire. It can write songs, program code, and simulate entire job interview sessions. It even passed the Amazon Web Services Certified Cloud Practitioner exam, which typically takes two to six months of preparation.
Perhaps most alarmingly, the technology is still in its infancy. The millions of users exploring the uses of ChatGPT simultaneously provide more data to OpenAI to improve the chatbot.
The next version of the model, GPT-4, will have approximately 100 trillion parameters – about 500 times more than GPT-3. This approximates the number of neural connections in the human brain.
How will AI affect education?
The power of AI systems places a huge question mark over our education and assessment practices.
Assessment in schools and universities is primarily based on students providing a product of their learning to be graded, often an essay or written assignment. With AI models, these “products” can be produced to higher standards, in less time, and with very little effort on the part of a student.
In other words, the product provided by a student may no longer provide authentic evidence of the achievement of course learning outcomes.
And that's not just a problem for written assignments. A study published in February showed that OpenAI's GPT-3 language model significantly outperformed most students in introductory programming courses. According to the authors, this raises “an emerging existential threat to the teaching and learning of introductory programming”.
How should we respond?
Going forward, we will need to think about how AI can be used to support teaching and learning, rather than disrupting it. Here are three ways to do it.
1. Integrate AI into classrooms and lecture halls
History has repeatedly shown that educational institutions can adapt to new technologies. In the 1970s, the rise of portable calculators had math teachers worried about the future of their subject – but it is safe to say that mathematics has survived.
Just as Wikipedia and Google didn't sound the death knell for assessment, neither will AI. In fact, new technologies lead to new and innovative ways of working, and learning and teaching with AI will be no different.
Rather than being a tool to be banned, AI models should be meaningfully integrated into teaching and learning.
2. Judge students on critical thinking
One thing that an AI model cannot mimic is the process of learning, and the mental aerobics this entails.
The design of assessments could shift from evaluating only the final product to evaluating the entire process that led a student to it. Emphasis is then placed on the student’s critical thinking, creativity and problem-solving skills.
Students could freely use the AI to complete the task and be graded on their own merit.
3. Evaluate the things that matter
Instead of retreating to invigilated classroom exams to ban the use of AI (which some might be tempted to do), educators can design assessments that focus on what students need to know to succeed in the future. AI, it seems, will be one of those things.
AI models will increasingly be used across industries as the technology develops. If students will be using AI in their future workplaces, why not test them on it now?
The dawn of AI
Vladimir Lenin, leader of the Russian Bolshevik Revolution of 1917, supposedly said:
There are decades when nothing happens, and there are weeks when decades happen.
This sentiment rings true in the field of artificial intelligence. AI is forcing us to rethink education. But if we embrace it, it could empower students and teachers alike.