“Please Slow Down” – The 7 Biggest AI Stories of 2022

Advances in AI image synthesis in 2022 have made possible images like this, which was created using Stable Diffusion, enhanced with GFPGAN, extended with DALL-E, and then manually composited.

Benj Edwards / Ars Technica

More than once this year, AI experts have repeated a familiar refrain: “Please slow down.” AI news in 2022 has been swift and relentless; by the time you knew where things stood, a new paper or discovery would render that understanding obsolete.

In 2022, we arguably hit the knee of the curve when it comes to generative AI that can produce creative works of text, images, audio, and video. This year, deep learning AI emerged from a decade of research and began making its way into commercial applications, allowing millions of people to try the technology for the first time. AI creations inspired wonder, stirred controversy, provoked existential crises, and turned heads.

Here’s a look back at the seven biggest AI stories of the year. It was hard to pick just seven, but if we didn’t cut it somewhere, we’d still be writing about this year’s events well into 2023 and beyond.

April: DALL-E 2 dreams in pictures

A DALL-E example of “an astronaut on horseback”.

OpenAI

In April, OpenAI announced DALL-E 2, a deep learning image synthesis model that blew people away with its seemingly magical ability to generate images from text prompts. Trained on hundreds of millions of images pulled from the Internet, DALL-E 2 could create novel combinations of imagery thanks to a technique called latent diffusion.
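For readers who want a feel for this kind of text-to-image generation, here is a minimal sketch using the openly released Stable Diffusion model (mentioned in the image caption above) through Hugging Face's diffusers library; DALL-E 2 itself is accessible only through OpenAI's hosted service, so Stable Diffusion stands in, and the checkpoint name and prompt are purely illustrative.

```python
# A minimal sketch of text-to-image generation with a latent diffusion model.
# Stable Diffusion stands in for DALL-E 2, which has no public weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Generate and save a 512x512 image from a text prompt.
image = pipe("an astronaut riding a horse, photorealistic").images[0]
image.save("astronaut.png")
```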

Twitter was quickly filled with images of astronauts on horseback, teddy bears wandering around ancient Egypt, and other near-photorealistic works. We had last heard about DALL-E a year earlier, when version 1 of the model struggled to render a low-resolution avocado chair; suddenly, version 2 was illustrating our wildest dreams at 1024×1024 resolution.

At first, given concerns about misuse, OpenAI only allowed 200 beta testers to use DALL-E 2. Content filters blocked violent and sexual prompts. Gradually, OpenAI admitted more than one million people to a closed trial, and DALL-E 2 finally became available to everyone at the end of September. But by then, another contender in the world of latent diffusion had emerged, as we’ll see below.

July: Google engineer thinks LaMDA is sentient

Blake Lemoine, former Google engineer.

Getty Images | Washington Post

In early July, the Washington Post broke the news that a Google engineer named Blake Lemoine had been put on paid leave over his belief that Google’s LaMDA (Language Model for Dialogue Applications) was sentient and deserved the same rights as a human.

While working at Google’s Responsible AI organization, Lemoine began chatting with LaMDA about religion and philosophy and believed he saw some real intelligence behind the text. “I know a person when I talk to them,” Lemoine told the Post. “It doesn’t matter if they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that’s how I decide what is and is not a person.”

Google responded that LaMDA was only telling Lemoine what he wanted to hear and that LaMDA was not, in fact, sentient. Like the GPT-3 text generator, LaMDA had been trained on millions of books and websites. It responded to Lemoine’s input (a prompt that includes the full text of the conversation) by predicting the most likely words to follow, without any deeper understanding.
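To make that “predicting the most likely words” point concrete, here is a minimal sketch of next-token prediction using the small, openly available GPT-2 model via the transformers library; LaMDA itself is not public, so GPT-2 stands in, and the prompt is just an illustration.

```python
# A minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 stands in for LaMDA, whose weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The prompt stands in for the running conversation text.
prompt = "I talk to them, and I hear what they have to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# Show the five most probable next tokens and their probabilities.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```

The model assigns a probability to every possible next token given the prompt; sampling from that distribution, one token at a time, produces fluent conversation without any claim about understanding.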

Along the way, Lemoine allegedly violated Google’s confidentiality policy by telling others about his group’s work. Later in July, Google fired Lemoine for violating its data security policies. He wasn’t the last person in 2022 to get swept up in the hype over an AI large language model, as we’ll see.

July: DeepMind AlphaFold predicts nearly every known protein structure

Schematic of protein ribbon models.

In July, DeepMind announced that its AlphaFold AI model had predicted the shapes of nearly every known protein of nearly every organism on Earth with a sequenced genome. Originally announced in the summer of 2021, AlphaFold had previously predicted the shapes of all human proteins. But a year later, its protein database had grown to contain more than 200 million protein structures.

DeepMind made these predicted protein structures available in a public database hosted by the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL-EBI), allowing researchers around the world to access and use the data for research related to medicine and biological science.

Proteins are the building blocks of life, and knowing their shapes can help scientists control or modify them. That is particularly useful when developing new drugs. “Almost all the drugs that have come to market in recent years have been designed in part through knowledge of protein structures,” said Janet Thornton, senior scientist and director emeritus at EMBL-EBI. That makes knowing them all a big deal.
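As a practical illustration of how open the data is, here is a short sketch of fetching one predicted structure from the public AlphaFold database; the URL pattern and model version reflect the file-naming convention the site used in late 2022 (an assumption on our part), and the UniProt accession P69905 (human hemoglobin subunit alpha) is just an example.

```python
# Sketch: download one predicted structure from the public AlphaFold database.
# Assumes the AF-<accession>-F1-model_v4 file-naming convention used by the
# site in late 2022; P69905 is human hemoglobin subunit alpha.
import requests

accession = "P69905"
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

response = requests.get(url, timeout=30)
response.raise_for_status()

# Save the structure locally for viewing in any molecular viewer.
with open(f"{accession}.pdb", "w") as f:
    f.write(response.text)

print(response.text.splitlines()[0])  # first record of the PDB file
```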
