Geoffrey Hinton is known as “the godfather” of Artificial Intelligence (AI). In 2012 he helped invent the technology that serves as the basis for the systems that have revolutionized the world in recent months. He is a Turing Award winner and until recently worked on AI development at Google. He resigned, he says, so he could warn about the technology freely: “They shouldn’t go any further until they know if they can control it.”
Hinton’s warning is not just about Google, which he says has acted in a “responsible manner.” It is a wake-up call to the entire industry, which has been on a meteoric run of AI system launches since ChatGPT’s release in November of last year. “It’s hard to see how you can stop bad actors from using it for bad things,” he said in an interview with The New York Times.
Hinton, 75, notified Google of his resignation in April and spoke directly with CEO Sundar Pichai last Thursday, the Times reported. He worked for more than a decade at the company, during which time he became one of the most respected voices in the industry. “There’s a part of me that regrets my life’s work,” the scientist acknowledges.
The former Google researcher’s immediate concerns have to do with two things: the risk that AI poses to jobs and its influence on disinformation campaigns. “It takes away the heavy work, but it may end up taking away more than that,” warns Hinton.
A key player in the development of AI
Hinton adopted the concept of the “neural network” while he was a graduate student at the University of Edinburgh, Scotland, in 1972. A neural network is a mathematical system that emulates the way the human brain processes information. He devoted almost his entire career to its development.
Hinton settled in Canada in the 1980s. In 2012, along with two of his graduate students in Toronto, he built a neural network that could analyze thousands of photos and learn to identify common objects, such as dogs and cars. Google later paid $44 million for the company that Hinton and his two students had founded.
One of those two students was Ilya Sutskever, who went on to become chief scientist of OpenAI, the company that created ChatGPT. In 2018, Hinton received the Turing Award, the equivalent of a “Nobel Prize in computing,” for his work on neural networks.
When Google and OpenAI first started developing their neural networks, Hinton thought they were a great mechanism for machines to understand and generate language. But he grew concerned last year, when these companies began building systems trained on much larger amounts of data. He realized that in some ways these AIs were outpacing human intelligence.
“Very few thought that these things could actually become smarter than people,” says Hinton. “Most of us, including myself, believed that this was still a long way off. I assumed it was 30 to 50 years away. Obviously, I no longer think that.”
The former Google researcher adds to the alarm about AI
Hinton does believe that future versions of AI may become a threat to humanity. Because of the vast amounts of data they analyze, these systems often learn unexpected behaviors. Hinton even fears that one day truly autonomous weapons will become a reality.
Google’s own CEO, Sundar Pichai, has already acknowledged as much, saying last month that Bard, the company’s AI chatbot, has several capabilities that the company still does not fully understand. He said the model has shown several unexpected emergent abilities in recent tests, such as reasoning, planning, and creativity.
Various representatives of the scientific community have already drawn attention to the risks of AI. In March, more than a thousand specialists asked large companies to pause the development of AI models until it is known with certainty “that their effects will be positive and their risks will be manageable.” They did so in an open letter signed by several industry executives, including Elon Musk, owner of Twitter and co-founder of OpenAI.
Other former Google workers have taken the same line as Hinton. Two analysts for Google’s AI products, for example, tried to stop the launch of Bard. They put their concern in writing, in a report on the risk that the company’s chatbot would generate false or dangerous content.