OpenAI has formed a team to study the risks that future artificial intelligence models may pose. A few days before the start of the first global summit on AI safety, the creators of ChatGPT revealed a division focused on evaluating and protecting humanity against catastrophic events, such as chemical and nuclear threats, cyberattacks, and autonomous replication.
The Preparedness team will be led by Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology. Madry, a specialist in machine learning and algorithmic graph theory, will be responsible for tracking, assessing, forecasting, and protecting against the “catastrophic risks” of future AI systems.
The team will focus on models that can persuade and deceive humans, those that could help in the development of chemical, biological, and nuclear weapons, and those that could play a key role in cyberattacks. A special category concerns artificial general intelligence (AGI), described by OpenAI as “the most impactful technology humanity has ever invented.”
These superintelligent systems could help us solve the world’s biggest problems, but they could also lead to the destruction of the human race. According to OpenAI, there is currently no way to steer or control AI systems smarter than us. The Preparedness team will also analyze the risks of autonomous replication and adaptation.
OpenAI is not only considering near-future scenarios, but also those we have so far seen only in science fiction novels. Could Aleksander Madry become a key figure in the creation of blade runners? Hopefully Preparedness will prevent Skynet from going live and becoming self-aware.
What are the “catastrophic risks” of artificial intelligence, according to OpenAI?
According to OpenAI, managing the risks of future artificial intelligence requires determining how dangerous these systems are when they are misused. One relevant category the team led by Madry will focus on is cybersecurity: the probability that a malicious actor will use artificial intelligence to generate malicious code or carry out cyberattacks is a latent risk.
OpenAI also considers chemical, biological, radiological, and nuclear (CBRN) terrorism. The use of artificial intelligence models to design and manufacture weapons of mass destruction is a latent fear in the United States. Joe Biden’s administration believes that China could use AI for this purpose, which is why it blocked the sale of advanced chips to that country.
“We believe that cutting-edge AI models, which will surpass the capabilities currently present in the most advanced models, have the potential to benefit all of humanity. But they also pose increasingly serious risks,” OpenAI said.
Another expert who believes this problem should be addressed is Demis Hassabis, chief executive of Google DeepMind. In an interview with The Guardian, Hassabis said that the dangers posed by artificial intelligence are as serious as those of global warming. “Current AI systems do not pose risks, but future generations may when they have additional capabilities such as planning and memory,” he stated.
Hassabis proposed the creation of an organization inspired by the IPCC, made up of researchers, scientists, and experts in artificial intelligence. Its objective would be to analyze, report on, and recommend solutions for possible scenarios in which the human race is in danger.