Artificial intelligence (AI) could pose a “more urgent” threat to humanity than climate change, warned Geoffrey Hinton, a pioneer of the technology.
In an interview with Reuters, Hinton, widely known as one of the “godfathers of AI,” discussed his recent announcement that he had left Google parent Alphabet after a decade with the company, saying he wanted to talk about the risks of the technology without it reflecting on his former employer.
Hinton’s work is considered essential to the development of contemporary AI systems. In 1986, he co-authored the paper “Learning Representations by Back-Propagating Errors,” a milestone in the development of the neural networks that underpin AI technology.
In 2018, he received the Turing Award in recognition of his research advancements. But now he is among a growing number of tech leaders publicly expressing concern about the potential threat posed by AI if machines were to achieve greater intelligence than humans and take over the planet.
Hinton’s concern
“I would not want to devalue climate change. I wouldn’t want to say, ‘You shouldn’t be worrying about climate change.’ That is also a big risk. But I think this could end up being more urgent,” Hinton said of the advances in AI.
He added: “With climate change, it’s very easy to recommend what you should do: you just stop burning carbon. If you do that, eventually things will be fine. For this, it is not at all clear what you should do.”
Microsoft-backed OpenAI kicked off a technology race in November when it made the AI-powered chatbot ChatGPT available to the public. It soon became the fastest growing app in history, reaching 100 million monthly users in two months.
In April, Twitter CEO Elon Musk joined thousands of others in signing an open letter calling for a six-month pause on development of systems more powerful than OpenAI’s recently released GPT-4. Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and fellow AI pioneers Yoshua Bengio and Stuart Russell.
While Hinton shares the signatories’ concern that AI could prove to be an existential threat to humanity, he disagreed with halting research: “I’m in the camp that thinks this is an existential risk, and it’s close enough that we have to work very hard right now and put a lot of resources into figuring out what we can do about it.”