
ChatGPT Owner Investigated Over Risks of Fake Answers

July 13, 2023

Updated July 14, 2023


US regulators are probing artificial intelligence firm OpenAI over the risks to consumers of ChatGPT generating false information.

The Federal Trade Commission (FTC) sent a letter to the Microsoft-backed company requesting information about how it deals with reputational risks to individuals.

The investigation is a sign of increasing regulatory scrutiny of the technology.

OpenAI chief executive Sam Altman said the company will work with the FTC.

ChatGPT generates convincing, human-like responses to user queries in seconds, instead of the series of links returned by a traditional internet search. This product and similar AI tools are expected to dramatically change the way people find information online.

Tech rivals are rushing to come up with their own versions of the technology, even as it generates fierce debate, including over the data it uses, the accuracy of its answers, and whether copyrights were violated in training the technology.

The FTC letter asks what steps OpenAI has taken to address the potential of its products to “generate statements about real individuals that are false, misleading, derogatory, or harmful.”

The FTC is also reviewing OpenAI’s approach to data privacy and how it obtains data to train and inform AI.

Mr Altman said OpenAI spent years on safety research and months making ChatGPT "safer and more aligned before releasing it".

“We protect user privacy and design our systems to learn about the world, not individuals,” he said on Twitter.

In another tweet he said that it was important to the company that its “technology be safe and pro-consumer, and we are confident that we comply with the law. Of course, we will work with the FTC.”

Mr Altman appeared before a congressional hearing earlier this year, where he admitted that the technology could be a source of errors.

He called for the creation of regulations for the emerging industry and recommended the creation of a new agency to oversee AI safety. He added that he expects the technology to have a significant impact, including on jobs, as its uses become clear.

"I think if this technology goes wrong, it can go quite wrong... we want to be vocal about that," Mr Altman said at the time. "We want to work with the government to prevent that from happening."

The FTC investigation was first reported by The Washington Post, which published a copy of the letter. OpenAI did not respond to a BBC request for comment.

The FTC also declined to comment. The consumer watchdog has played a prominent role in monitoring tech giants under its current chair, Lina Khan.

Ms Khan rose to prominence as a Yale law student, when she criticized the US record of antitrust enforcement in relation to Amazon.

Nominated by President Joe Biden, she is a controversial figure, with critics saying she pushes the FTC beyond the limits of its authority.

Some of her most high-profile challenges to tech company operations — including a push to block Microsoft's merger with gaming giant Activision Blizzard — have faced setbacks in court.

During a five-hour congressional hearing on Thursday, she faced heavy criticism from Republicans over her leadership of the agency.

She did not mention the FTC investigation into OpenAI, which is at a preliminary stage. But she said she had concerns about the release of the product.

"We've heard of reports where people's sensitive information has surfaced in response to somebody else's query," Ms Khan said.

"We've heard of libel, defamatory statements, flatly untrue things emerging. That is the type of fraud and deception that concerns us," she added.

The FTC probe isn’t the company’s first challenge on these issues. Italy banned ChatGPT in April, citing privacy concerns. The service was restored after adding a tool to verify users’ ages and providing more information about its privacy policy.

