A new day, a new warning about how artificial intelligence could destroy us as a civilization. This time, it comes from a group of top executives, experts and scientists in the sector, led by Sam Altman, the creator of ChatGPT. They have published a brief statement asserting that the potential "risk of extinction" posed by AI should be treated with the same importance as other global catastrophic threats, such as nuclear war.
The message in question bears the signatures of prominent figures from the world of artificial intelligence, but it is striking for its brevity. Barely twenty words sum up an expression of concern, rather than a specific action plan to avert the supposed annihilation.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads. The text was published on the website of the Center for AI Safety, a non-profit organization. Although, to be honest, it almost seems to have been written by ChatGPT.
However, there is a reason such a short message carries such a big warning. According to the organization, many parties are discussing the risks of artificial intelligence today, including experts, legislators, journalists and the general public. Yet it remains difficult to communicate, simply and forcefully, the most severe and immediate dangers the technology brings with it. With this message, then, they intend to "open the discussion" and make known which specialists take this problem seriously.
Of course, Sam Altman is not the only signatory of this message. In addition to the creator of ChatGPT, there are names such as Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; Emad Mostaque, CEO of Stability AI; and Kevin Scott, Microsoft's chief technology officer. As well as researchers of the stature of Geoffrey Hinton, Yoshua Bengio and Lex Fridman, among many others.
Are ChatGPT and other AIs an extinction risk for civilization?
Beyond the curious way the experts have chosen to express their concern about AI, the statement invites reading between the lines. What they say does not necessarily mean they believe Bard or ChatGPT are capable of becoming Skynet and unleashing a machine uprising.
However, they do argue that the impact of artificial intelligence on the daily lives of millions of people should not be left to chance. They therefore consider it necessary to study and implement safeguards so that, in the future, AI cannot be applied to destructive ends on the same level as a nuclear war or a pandemic. This certainly goes hand in hand with the debate on regulating the technology, a discussion being promoted even by the industry's own heavyweights, such as Sam Altman's startup.
In this sense, the statement published by the Center for AI Safety appears to distance itself from the controversial open letter promoted by Elon Musk. The latter pointed directly at OpenAI and proposed pausing the development of language models more powerful than GPT-4 for six months. The experts who accompanied the tycoon argued that the technology behind ChatGPT, Bing or Copilot X could bring "catastrophic effects for society". The curious thing is that several signatories of that letter have now also signed the new statement.
The risks of AI

Despite the conciseness of the statement promoted by the creator of ChatGPT and the leading minds in AI, there is already plenty of documentation on the risks of this technology. In fact, the Center for AI Safety has collected examples of how artificial intelligence could be used for nefarious purposes. Some could apply today, while others point more toward the future.
In this sense, the possibility of weaponizing AI is mentioned. "Deep reinforcement learning methods have already been applied to air combat, and machine learning tools for drug discovery could be used to build chemical weapons," they explain. The organization also warns about using the technology to spread disinformation, something we have already seen recently with the photos of the alleged arrest of Donald Trump or the fake Minister of Health of Japan.
Another danger posed by artificial intelligence is that the most capable systems could concentrate enormous power in a very small group of people. Something for which, in fact, OpenAI has already been criticized after the furor over tools like DALL-E and ChatGPT. To the point that Sam Altman denied that his technology is evolving uncontrollably, and assured that, for now, they are not even thinking about developing GPT-5.
If AI risks really do rank alongside other catastrophic events, it will be interesting to see how potential regulation is addressed. That is far from depending solely on the goodwill of experts and scientists: the involvement of legislators will also be crucial, not to mention that of the companies investing billions of dollars today. But let us not forget that the dangers of artificial intelligence are only part of a much larger story, and that the technology also holds potentially great benefits for humanity, especially in education and health.
Therefore, as Bill Gates said recently, it is necessary to strike a balance. "We should try to balance fears about the downsides of AI, which are understandable and valid, with its ability to improve people's lives. To make the most of this remarkable new technology, we'll need to both guard against the risks and spread the benefits to as many people as possible," said the Microsoft co-founder.