Artificial Intelligence (AI) is a tool that can benefit humanity, but it can also put humanity at risk. The debate has reopened following statements from two leading figures in AI development.
Geoffrey Hinton, a computer scientist and AI researcher, has said that he believes AI is "the greatest existential threat facing humanity." He has warned that AI could "outperform us at everything" and that we could "end up being pets or slaves to AI."
For his part, Sam Altman, CEO of OpenAI, has said he believes AI is "the most important technology of our time." He has warned that AI could "be used for good or ill" and that it is "critical that we get it right." The warnings from experts and the opinions of Hinton and Altman are a sobering reminder of the potential dangers of AI. While AI has the potential to greatly benefit humanity, it also has the potential to destroy us.
Previous Warnings: The 2015 Letter
In 2015, a group of experts warned that artificial intelligence (AI) could pose an existential threat to humanity. The experts, including physicists, computer scientists, and philosophers, said that AI could become so powerful that it could easily overwhelm and destroy humans.
The 2015 letter included the signatures of Elon Musk, Stephen Hawking, and even Bill Gates. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter," set out detailed research priorities in a twelve-page accompanying document.
In it, the signatories warned that AI could "represent an existential risk to humanity." The letter called for more research into the safety and ethics of AI and urged governments to take action to regulate AI development, something that is only now being addressed, eight years later.