Advances in artificial intelligence have left society in a state of astonishment. But a group of experts poses a brutal scenario: there may come a time when what is applauded today ends life on Earth.
Eliezer Yudkowsky, a researcher and author on artificial intelligence, wrote an article for Time magazine in which he issues this harsh warning.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he says.
“Artificial intelligence does not care about us, nor about sentient life in general,” adds Yudkowsky.
Elon Musk, Steve Wozniak and more than a thousand figures against Artificial Intelligence
Recently, a group of 1,125 figures from the tech world, including not only Yudkowsky but also Elon Musk and Apple co-founder Steve Wozniak, called for a pause of at least six months in the training of the most powerful AI systems: those that would be the next step beyond OpenAI’s GPT-4.
Musk, at the time, stated: “I’m a little worried about the Artificial Intelligence stuff. I think it’s something we should worry about. I think that we need some kind of regulatory authority, or something like that, to oversee the development of the AI and make sure it works in the public interest.”
The billionaire, quoted by Slash Gear, added that artificial intelligence is “quite a dangerous technology. I fear I may have done some things to accelerate it.” He said this in reference to having been among the founders of OpenAI, although he left the company in 2018.
Eliezer Yudkowsky calls for “an indefinite, worldwide ban” on AI
But while the letter Yudkowsky signed called for a six-month pause on AI training, in the Time article he goes further, proposing “an indefinite and worldwide ban,” with no exceptions for governments or militaries.
“If intelligence agencies determine that a country outside any international agreement is building a GPU cluster, one should be less afraid of an armed conflict between nations than of the pause on AI development being violated,” says the researcher.
He states: “You have to be willing to destroy a rogue data center by airstrike.”