Should We Heed AI Experts’ Warnings To Stop NOW?


From “technocracy.news”

99% of the population has no concept of what AI actually is or what kind of threat it poses to humanity. When true experts in the industry issue stark warnings to stop AI development, the rest of the world still has no concept of why anyone should listen to them, and so development continues unabated and unchecked. ⁃ TN Editor

A TOP AI expert has issued a stark warning that super-smart AI technology could bring about the extinction of humanity.

Eliezer Yudkowsky, a leading AI researcher, claims that “everyone on the earth will die” unless we shut down the development of superhuman intelligence systems.

The 43-year-old is a co-founder of the Machine Intelligence Research Institute (MIRI) and claims to know exactly how “horrifically dangerous this technology” is.

He fears that when it comes down to humans versus smarter-than-human intelligence – the result is a “total loss”, he wrote in TIME.

As a metaphor, he says, this would be like “the 11th century trying to fight the 21st century”.

In short, humans would lose dramatically.

On March 29, leading AI experts published an open letter called “Pause Giant AI Experiments” that demanded an immediate six-month pause on the training of powerful AI systems.

It has been signed by the likes of Apple’s co-founder Steve Wozniak and Elon Musk.

However, the American theorist says he declined to sign the petition because it is “asking for too little to solve it”.

The threat is so great that he argues that extinction by AI should be “considered a priority above preventing a full nuclear exchange”.

He warns that the most likely result of building superhuman AI is that we will create “AI that does not do what we want, and does not care for us nor for sentient life in general.”

We are not ready, Yudkowsky admits, to teach AI how to be caring as we “do not currently know how”.

Instead, the stark reality is that in the mind of such an AI, “you are made of atoms that it can use for something else”.

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Yudkowsky is keen to point out that presently “we have no idea how to determine whether AI systems are aware of themselves”.

What this means is that scientists could accidentally create “digital minds which are truly conscious” — raising all kinds of moral dilemmas, since conscious beings arguably should have rights and not be owned.

Our ignorance, he warns, will be our downfall.

Read full story here…
