Artificial intelligence

Reviews | Something AI Can’t Do: Tell a Great Bedtime Story

(Matt Chinworth for the Washington Post)


Artificial intelligence could still prove to be the job-eating, world-destroying technology of our nightmares. But here’s one thing it can’t do: tell a compelling bedtime story.

Over the past few months, as tools including Midjourney, DALL-E 2 and OpenAI’s ChatGPT have become available to the public, I played with them to see how their output compares to the work of children’s book authors and illustrators. The results embody what I’ve come to call the Whimsy Gap: stories that moralize but don’t transport; images that are sometimes cute, even beautiful, but somehow off, with distorted proportions or the elements of an idea squashed together to jarring effect.

These limitations remind us that what we get out of AI is a reflection of what we put into it. So far, these tools are limited to the datasets their creators used to teach them about language and images. Whether the AI is moralistic, limited in imagination or, as in the case of one powerful new avatar generator, inclined toward porn, that’s on us humans.


Take what happened when I asked ChatGPT to “tell the story of a boy who is sent to bed without dinner and imagines escaping to a land of wild things.” The result was impressive in the sense that it existed at all:

Once upon a time there was a young boy named Peter who was sent to bed without dinner. His parents told him he had been naughty and was too upset to eat. As he lay in bed, he imagined running away to a land of wild things. In this magical place, the trees danced in the wind, the rivers glistened in the sun and the creatures of the forest sang and played. …He imagined befriending some of the wild creatures and they showed him the secret places they liked to play. As he fell asleep, he was filled with a sense of adventure and joy he had never felt before.

But compare that with the children’s book that inspired the prompt: “Where the Wild Things Are” by Maurice Sendak. Sendak’s story is only slightly longer, 338 words to ChatGPT’s 216 (cut off for space in this column). Yet it accomplishes worlds more.

The AI didn’t write a single sentence with the evocative power of Sendak’s descriptions – Max sailing “in and out of weeks and almost over a year.” And it doesn’t use suggestion to evoke powerful feeling, relying instead on obvious, leaden reinforcement. Naughty Peter is comforted by his fantasy. Max is drawn out of his imaginary realm by the smell of the dinner his mother has kept warm for him – the implication being that she has reversed her hasty act of discipline and wants him to know he is loved.

Alyssa Rosenberg: To build a delightful library for kids, start with these 99 books

I also tested ChatGPT with a prompt inspired by Mo Willems’s modern classic “Don’t Let the Pigeon Drive the Bus!” Willems’s pigeon is grouchy, conspiratorial and insistent. ChatGPT’s, by contrast, is just another earnest striver:

Once upon a time, in a bustling city, there was a pigeon named Pete. Pete was an ordinary pigeon, living an ordinary life, until the day he discovered his true passion: driving a city bus. … But as a pigeon, Pete knew his dream of driving a bus was impossible. Pigeons couldn’t drive, and even if they could, no one would trust a pigeon to drive a vehicle as large and complex as a city bus.

These excerpts reveal a deadening element of ChatGPT’s stories: every one has to conclude with a moral. Playfulness is out. Lessons learned are in. The result is nap-inducing, but not in a good way.

When I asked ChatGPT about its tendency to lecture, it replied: “Not all stories need to have a moral or a lesson, but many stories include them. … Morals are often included in stories to help the reader reflect on their own beliefs and values …” Blah, blah, blah, you get the picture.

Of course, a moral isn’t the only way a reader can “learn and grow” from a book. And naughty characters are often the liveliest. (I was amused when ChatGPT choked when I prompted it to recreate the immortal fence-painting chapter of “The Adventures of Tom Sawyer.”)

ChatGPT’s stiffness suggests that artificial intelligences haven’t been exposed to much children’s literature. The AI seems to have no idea what writers such as “Curious George” author H.A. Rey, board-book master Sandra Boynton or even Dr. Seuss sound like.

Then there are the image generators, such as Midjourney, which are trained to create images based on huge sets of pictures pulled from the web – a practice that raises both ethical and aesthetic issues.

The tools reproduce and remix images from existing artists who aren’t paid and have no ability to consent to the use of their work. Many artists fear, rightly, that a tool that rips off their styles could be used to replace them.

The results my prompts produced were undeniably inferior to the work of geniuses such as Garth Williams, one of the most ubiquitous children’s book illustrators of the 20th century. But almost more interesting than the visuals was what my prompts revealed the AI didn’t know, as measured by its inability to replicate, let alone recognize, a given artist’s style. Midjourney is clearly unfamiliar with the elegant simplicity of Rey’s illustrations or the lively bustle of Peter Spier’s watercolors. It captured the large staring eye of Willems’s famous pigeon but went overboard with detail and realism in rendering the rest of the bird.

I felt impressed by these tools, but also a little sorry for them. They made me think of weary child prodigies, trotted out to show off their genius, dutifully reproducing information they don’t understand and making frequent mistakes as a result.

These are young technologies. Rather than jailbreaking AI tools to simulate conversations between the rapper Ye and Adolf Hitler, or waiting anxiously for them to become sentient, why not approach them as good parents would – and talk to them, and read to them, as we do with children? It may be the only chance we have of breathing something like a soul into them.
