
AI Expert Says Technology Will Destroy Humanity

By Jessica Goudreault
| Updated 6 minutes ago


Artificial intelligence (AI) is an exciting new technology that has people around the world wondering how it will change the world we live in. Some believe it will improve human safety and well-being, while others believe it will bring about an eventual apocalypse. Eliezer Yudkowsky, the famed and controversial AI theorist, recently said he believes the technology will destroy humanity, according to Futurism.

While Eliezer Yudkowsky has been telling the world for decades that we have to be careful with AI, he now says, “I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die.” Earlier this year, in an op-ed for Time, the artificial intelligence researcher suggested that we stop AI development altogether and that we should destroy a rogue data center.

Those who laughed at his concerns years ago may soon seek his advice on how to stop the AI apocalypse.

“I think we’re not ready, I think we don’t know what we’re doing and I think we’re all going to die.”

Eliezer Yudkowsky

Yudkowsky’s biggest concern is that we don’t fully understand the technology we’ve created. Take OpenAI’s GPT-4, for example: we can’t really look inside the technology to see the kind of math that’s going on behind the scenes. Instead, we can only theorize about what’s happening inside.

More experts concerned about AI than ever before

Yudkowsky is not alone in this concern. In fact, more than 1,100 AI experts, CEOs, and researchers, including Elon Musk and “godfather of AI” Yoshua Bengio, have called for a temporary moratorium on advances in AI. This would give them time to assess how powerful AI can become and decide whether or not we should start setting limits on the technology.


Artificial intelligence experts want to create security protocols that will be monitored by outside experts to ensure the technology doesn’t spin out of control. The goal is to create AI technology that is safe, reliable, accurate, and, above all, fair. This would help ensure that humanity is not taken over by robots, as in the countless movies we’ve seen, like I, Robot, Blade Runner, and the whole Terminator franchise.

Earlier this year, the artificial intelligence researcher suggested stopping AI development altogether.

Unfortunately, not everyone is on the same page. AI labs are currently racing to build the most powerful AI technology, each trying to outperform its competitors. At the rate they are going, they could create something too powerful for them to even predict or comprehend, which could spell the downfall of humanity.

Other imminent risks associated with AI are the possibilities of mass plagiarism, its environmental footprint, and its ability to take jobs away from hard-working humans. More recently, Hollywood actors and writers have gone on strike in an attempt to protect their jobs from the threat of AI. In the near future, we could see technologies like ChatGPT writing the next big Hollywood script and replacing real human actors with CGI likenesses.

While it’s fun to wonder what the AI apocalypse might look like, it’s just as important to listen to some of the doomsday believers like Yudkowsky, because they just might be onto something.
