New chatbots could change the world. Can you trust them?

This month, Jeremy Howard, an artificial intelligence researcher, introduced an online chatbot called ChatGPT to his 7-year-old daughter. It had been released a few days earlier by OpenAI, one of the world’s most ambitious AI labs.

He told her to ask the experimental chatbot anything that came to mind. She asked what trigonometry was used for, where black holes came from and why hens incubated their eggs. Each time, it answered in clear, well-punctuated prose. When she asked for a computer program that could predict the trajectory of a ball thrown through the air, it gave her that, too.
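For readers curious what such a program might look like, here is a minimal sketch in Python of one way to compute that trajectory. It assumes simple projectile physics with no air resistance, and the launch speed and angle are arbitrary example values; it is an illustration, not the chatbot’s actual output.

```python
# A rough sketch of a program that predicts the path of a thrown ball,
# assuming basic projectile motion and no air resistance.
import math

def trajectory(speed, angle_degrees, g=9.81, steps=20):
    """Return (time, x, y) points for a ball thrown from ground level."""
    angle = math.radians(angle_degrees)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    flight_time = 2 * vy / g  # time until the ball lands again
    points = []
    for i in range(steps + 1):
        t = flight_time * i / steps
        points.append((t, vx * t, vy * t - 0.5 * g * t * t))
    return points

# Example values chosen for illustration only.
for t, x, y in trajectory(speed=15, angle_degrees=45):
    print(f"t={t:.2f}s  x={x:.2f}m  y={y:.2f}m")
```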

Over the next few days, Mr. Howard – a data scientist and professor whose work inspired the creation of ChatGPT and similar technologies – came to see the chatbot as a new kind of personal tutor. It could teach his daughter math, science and English, not to mention a few other important lessons. Chief among them: Don’t believe everything you’re told.

“It’s a pleasure to watch her learn like this,” he said. “But I also told her: Don’t trust everything it gives you. It can make mistakes.”

OpenAI is one of many companies, academic labs and independent researchers working to build more advanced chatbots. These systems cannot chat exactly like a human, but they often seem to. They can also retrieve and repackage information with a speed humans never could. They can be thought of as digital assistants – like Siri or Alexa – that are better at understanding what you are looking for and giving it to you.

After the release of ChatGPT – which has been used by more than a million people – many experts believe these new chatbots are poised to reinvent or even replace internet search engines like Google and Bing.

They can serve up information in tight sentences, rather than long lists of blue links. They explain concepts in ways that people can understand. And they can deliver facts, while also generating business plans, essay topics and other new ideas from scratch.

“You now have a computer that can answer any question in a way that makes sense to a human,” said Aaron Levie, chief executive of the Silicon Valley company Box, and one of many executives exploring how these chatbots will change the technological landscape. “It can extrapolate and take ideas from different contexts and merge them.”

The new chatbots do this with what seems like complete confidence. But they do not always tell the truth. Sometimes they even fail at simple arithmetic. They blend fact with fiction. And as they improve, people could use them to generate and spread untruths.

Google recently built a system specifically for conversation called LaMDA, or Language Model for Dialogue Applications. This spring, a Google engineer claimed it was sentient. It was not, but the claim captured the public’s imagination.

Aaron Margolis, a data scientist in Arlington, Virginia, was among the limited number of people outside Google allowed to use LaMDA through an experimental Google app, AI Test Kitchen. He was consistently amazed by its talent for open-ended conversation. It entertained him. But he warned that it could be a bit of a fabulist – as you would expect from a system trained on vast amounts of information posted on the internet.

“What it gives you is kind of like an Aaron Sorkin movie,” he said. Mr. Sorkin wrote “The Social Network,” a film often criticized for stretching the truth about the origin of Facebook. “Some parts will be true, and some parts will not be true.”

He recently asked both LaMDA and ChatGPT to chat with him as if they were Mark Twain. When he asked LaMDA, it quickly described a meeting between Twain and Levi Strauss, and said the writer had worked for the bluejeans mogul while living in San Francisco in the mid-1800s. It seemed true. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists call this problem “hallucination”. Much like a good storyteller, chatbots have a way of taking what they’ve learned and turning it into something new — without worrying about whether it’s true.

LaMDA is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the network of neurons in the brain. It is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets.

A neural network learns skills by analyzing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
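As a very rough illustration of what “learning from examples” means, the sketch below trains a single artificial neuron on a handful of made-up “cat” and “not cat” feature vectors. Real image recognizers are vastly larger and look at raw pixels, but the principle of adjusting weights to fit labeled data is the same; every number here is invented for illustration.

```python
# A deliberately tiny illustration of learning a pattern from labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Fabricated training data: each row is [ear_pointiness, whisker_score],
# label 1 = cat, 0 = not a cat. These values are made up.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.85],
              [0.2, 0.1], [0.1, 0.3], [0.25, 0.2]])
y = np.array([1, 1, 1, 0, 0, 0])

w = rng.normal(size=2)  # the neuron's adjustable weights
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient-descent training loop: nudge the weights to better match the labels.
for _ in range(1000):
    pred = sigmoid(X @ w + b)
    error = pred - y
    w -= lr * X.T @ error / len(y)
    b -= lr * error.mean()

test = np.array([0.85, 0.9])  # a new, unseen "photo"
print("Probability it is a cat:", round(float(sigmoid(test @ w + b)), 3))
```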

Five years ago, researchers at Google and labs like OpenAI began designing neural networks that analyzed enormous amounts of digital text, including books, Wikipedia articles, news stories and online chat logs. Scientists call them “large language models.” By identifying billions of distinct patterns in the way people connect words, numbers and symbols, these systems learned to generate text on their own.
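As a crude intuition for what “learning patterns in the way people connect words” means, the toy below counts which word tends to follow which in a short sample text and then strings together new text from those counts. Real large language models do this with neural networks and billions of parameters, so this is only an analogy, not their actual mechanism; the sample sentences are ours.

```python
# A toy "pattern-based" text generator: count which word follows which,
# then sample likely next words. Vastly simpler than a large language model.
import random
from collections import defaultdict

text = ("the cat sat on the mat . the cat chased the mouse . "
        "the mouse ran under the mat .")
words = text.split()

# For every word, record the words that follow it in the sample text.
following = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

# Generate new text by repeatedly sampling a plausible next word.
random.seed(1)
word = "the"
output = [word]
for _ in range(12):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```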

Their ability to generate language has surprised many researchers in the field, including many of the researchers who built them. The technology could mimic what people had written and combine disparate concepts. You could ask it to write a “Seinfeld” scene in which Jerry learns an esoteric mathematical technique called the bubble sort algorithm – and it would.
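For the curious, the bubble sort algorithm mentioned above is a standard, if inefficient, way to order a list: repeatedly sweep through it and swap adjacent items that are out of order. A minimal Python version looks like this; the example list is made up.

```python
# Standard bubble sort: swap neighboring items until the list is in order.
def bubble_sort(items):
    items = list(items)  # work on a copy
    n = len(items)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # already sorted, stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```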

With ChatGPT, OpenAI has worked to refine the technology. It does not do free-flowing conversation as well as Google’s LaMDA. It was designed to work more like Siri, Alexa and other digital assistants. Like LaMDA, ChatGPT was trained on a sea of digital text culled from the internet.

As people tested the system, it asked them to rate its responses. Were they convincing? Were they helpful? Were they truthful? Then, through a technique called reinforcement learning, OpenAI used the ratings to hone the system and define more precisely what it would and would not do.
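OpenAI has not published its training code here, but as a toy sketch of the general idea, responses that people rate highly can be made more likely and poorly rated ones less likely. In the illustrative code below, the “policy” is just a preference over three canned responses, and the ratings are invented; it is meant only to show the flavor of reinforcement learning from human feedback, not OpenAI’s method.

```python
# Toy illustration: use simulated human ratings as rewards so that
# highly rated responses become more likely over time (REINFORCE-style update).
import numpy as np

responses = [
    "A clear, correct explanation...",
    "An unsure but honest best guess...",
    "A confidently stated wrong answer...",
]
ratings = [0.9, 0.6, 0.1]  # made-up human ratings on a 0-1 scale

logits = np.zeros(len(responses))  # the policy's preferences, initially uniform
learning_rate = 0.5
rng = np.random.default_rng(0)

def sample(logits):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs), probs

for step in range(200):
    choice, probs = sample(logits)
    reward = ratings[choice]
    # Gradient of log-probability of the chosen response, scaled by its reward.
    grad = -probs
    grad[choice] += 1.0
    logits += learning_rate * reward * grad

final_probs = np.exp(logits) / np.exp(logits).sum()
print("Final preference over responses:", np.round(final_probs, 3))
```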

“It allows us to get to the point where the model can interact with you and admit when it’s wrong,” said Mira Murati, OpenAI’s chief technology officer. “It can reject something that is inappropriate, and it can challenge a question or a premise that is incorrect.”

The method was not perfect. OpenAI warned those using ChatGPT that it “may sometimes generate incorrect information” and “produce harmful instructions or biased content.” But the company plans to keep refining the technology, and it reminds people who use it that it is still a research project.

Google, Meta and other companies are also grappling with accuracy issues. Meta recently removed an online preview of its chatbot, Galactica, because it repeatedly generated incorrect and biased information.

Experts have warned that the companies do not control the fate of these technologies. Systems like ChatGPT, LaMDA, and Galactica are based on ideas, research papers, and computer code that have been circulating freely for years.

Companies like Google and OpenAI can advance technology at a faster pace than others. But their latest technologies have been replicated and widely disseminated. They cannot prevent people from using these systems to spread misinformation.

Just as Mr. Howard hoped his daughter would learn not to trust everything she read on the internet, he hoped society would learn the same lesson.

“You could program millions of these robots to look like humans, having conversations designed to win people over to a particular point of view,” he said. “I’ve warned against this for years. Now it is obvious that this is just waiting to happen.”
