After years of AI-generated content being best known for its comedic absurdity, sometimes drifting toward a disconcerting realism, 2022 was the year generative AI finally became a creative force in its own right.
A host of realistic image generators, led by OpenAI's DALL-E 2, have made it easy for anyone to create realistic visuals with a simple text prompt. Meanwhile, OpenAI's ChatGPT has put a conversational interface on top of the organization's state-of-the-art text generation system, allowing users to simply tell a machine what to write and receive a detailed, rhetorically polished (if not always factually correct) passage in seconds.
These new systems, trained on datasets spanning hundreds of millions of images and pages of text respectively, have already led to widespread experimentation among brands, agencies, booming startups and creative tool integrations.
But experts say 2023 will be the year marketers and brand agencies begin to take seriously how synthetic content like this can actually be deployed to drive results and augment human creativity. With this proliferation will also come a host of new risks that marketers will have to deal with, from machine copyright violations to concerns about verifying the authenticity of content.
Mark Curtis, chief innovation officer at Accenture Song and author of the company's annual Technology Trends Report, said generative AI is probably the most significant technology change he has identified in the past five to ten years.
“The things that agencies should do go beyond experimentation; they should calculate now what that means for their business,” Curtis said. “It’s a tool that humans will use to kick-start creative thinking or to create the base level of something, which they then continually adapt, or to move forward faster. … It’s not an answer to everything, but it radically changes the economics of much of what we do creatively.”
A revolution in robotic writing
OpenAI, the Microsoft-backed research lab that has led the charge in developing the generative AI models that have provided the backbone of the technology for the past few years, released ChatGPT late this year.
The new program builds on the group’s earlier large language models, namely GPT-3, and makes the tool more conversational. Rather than typing the start of a passage and having the tool complete it, users can now tell ChatGPT what to write with simple text-based directives. The results are often uncannily realistic in mimicking the syntax and style of a given type of writing.
For example, ChatGPT could be asked to write about itself in the style of an Adweek article. The results seem natural enough to pass for a story like this one, even if the bot perhaps exaggerates its own accuracy as a customer service tool.
Like GPT-3, ChatGPT has a host of uses, from testing different iterations of copy for digital advertising to creating realistic customer service chatbots and better contextual search tools. Some of these capabilities hinge on being able to limit some of the machine’s unpredictability and inaccuracies, a perennial problem since at least GPT-2’s release in 2019. But various startups and developers are already working to make its output more factually grounded or to build tools that work around its oversights.
Zach Kula, director of group strategy at BBDO, said the industry should think less about this tool in terms of replacing humans and more about the different ways it could revolutionize the way creatives do their work. He said it’s clear from his experimentation with the tool that it’s not about to bankrupt agencies.
“In my mind, it doesn’t seem like many people commenting on this have even used the tool,” Kula said. “If they did, it would be obvious that it’s not even close to replacing creative thinking. In fact, I would say it shows how valuable true creative thinking is. It highlights the difference between original creative thought and eloquently assembled information from a database.”
Ethical and practical risks
In addition to the possible benefits, however, generative AI also carries a host of risks that any marketer implementing it should be aware of, including the potential for accidental copyright infringement or plagiarism. Brands will also likely need to defend against fake content such as auto-generated user reviews or mass-generated defamatory content, according to research firm Gartner.
Gartner predicts that by 2027, 80% of enterprise marketers will establish a dedicated content authenticity function to root out AI-generated misinformation. The company also predicts that 70% of enterprise CMOs will include ethical AI responsibility among their top concerns as new regulations and risks develop.
As synthetic content creation tools become more efficient, the risk of synthetic content being produced at scale, whether in the form of deepfake text, images, or video, increases, and marketers will likely have to think carefully about how to protect against this type of misinformation in the future, according to Gartner analyst Bern Elliot.
“Foundation models reduce the cost of content creation, which means it becomes easier to create deepfakes that closely resemble the original,” Elliot said. “This includes everything from voice and video impersonation to fake art, as well as targeted attacks. The serious ethical concerns at stake could damage reputations or cause political disputes.”
Video as the next frontier
Experts say it’s likely that technologies like voice cloning, synthetic imagery, and generated copy could converge next year to let marketers create lifelike videos from whole cloth with AI. These capabilities could make it easier for marketers to produce targeted, personalized video ads aimed at different segments at scale.
While current examples of this technology are still rudimentary, Curtis said the pace of progress is accelerating so quickly that it’s hard to know what the state of the art will look like a year from now.
“Now it’s starting to go to video, and then it’s going to go 3D,” Curtis said. “We had to continually rewrite this trend over the last month and a half because new things were coming in. And I’m afraid whatever we’re going to say won’t be relevant by February.”