Demis Hassabis, CEO of Google DeepMind, said the risks of artificial intelligence must be treated with the same seriousness as climate change. He warned that the world must act quickly to avoid a scenario similar to that of global warming, and proposed the creation of a body similar to the Intergovernmental Panel on Climate Change (IPCC).
In an interview with The Guardian, the head of Google DeepMind warned about some of the risks posed by the development of this technology. AI could assist in the creation of biological weapons or threaten the existence of human beings if artificial general intelligence (AGI) is developed. Other experts have raised these concerns on multiple occasions, so, in his view, it is time to act.
“We must take the risks of AI as seriously as other major global challenges, such as climate change,” Hassabis said. “I think we have to start with something like the IPCC, where it is a scientific and research agreement with reports, and then move forward from there.” Hassabis's statements echo a proposal from Eric Schmidt, former CEO of Google and an advocate of the technology.
A few weeks ago, Schmidt and Mustafa Suleyman (co-founder of DeepMind) published a letter calling for the creation of an international panel on AI safety. Both argue that, before implementing sweeping regulation, governments and the public need to know what the most critical risks are. Schmidt and Suleyman believe the IPCC could serve as inspiration for this organization.
AI requires an independent body to address risks
Demis Hassabis is in favor of regulating artificial intelligence. The executive suggests that oversight could be carried out by a group of experts, similar to the IPCC. That intergovernmental body aims to provide an objective, scientific assessment of climate change, its risks, and mitigation options.
A similar panel, but focused on artificial intelligence, would collect data, publish reports, and monitor potential hazards and possible solutions. This body would be made up of researchers, scientists, and technology experts. Governments would then use this information to apply the necessary regulations.
The head of Google DeepMind envisions not only a panel similar to the IPCC, but also organizations like CERN or the International Atomic Energy Agency (IAEA).
“What I would like to see eventually is an equivalent of a CERN for AI safety that does research on that, but on an international level,” he mentioned. “And then maybe one day there will be some kind of equivalent of the IAEA, which actually audits these things.”
Demis Hassabis and other experts will participate in the first global summit on AI safety in the United Kingdom. The objective of this meeting will be to address risks from misuse and loss of control. The first concerns the ways an AI could help commit crimes, while the second poses a Skynet-like scenario in which systems turn against us.
The AI Safety Summit will take place on November 1-2 at Bletchley Park, the same site where Alan Turing cracked the Enigma code during World War II.