The rise of Artificial Intelligence has the world in suspense. The more it is used, the more opponents appear: for various reasons, there is a fear of it getting out of control. Tech figures like Sam Altman, Bill Gates and Elon Musk have taken positions on it.
While Altman and Gates are in favour, though acknowledging the risks, Elon Musk has always been wary.
This does not prevent the South African from promoting his own Artificial Intelligence through his company xAI. But he will do it under his own parameters, which he claims are more responsible than those of OpenAI, Altman’s company, which Bill Gates backs.
What do these figures think about an Artificial Intelligence out of control? Let’s look at the positions of Sam Altman, Bill Gates and Elon Musk.
Elon Musk vs. Artificial Intelligence
Elon Musk has been an early critic of the technology. As far back as 2014, the South African warned: “I think we must be very careful with Artificial Intelligence. With Artificial Intelligence, we are summoning the demon.”
In recent years, he has been concerned not only with the power given to the technology, but also with humanity’s diminishing capacity to respond to it.
“There is a risk that advanced AI will eliminate or limit the growth of humanity. It is a double-edged sword (…) If you have a genie who can grant you everything, who can also do anything, that necessarily presents a danger,” Musk points out.
“I don’t think AI is trying to destroy all of humanity, but it could put us under strict controls,” he stresses.
Sam Altman calls for government controls
Sam Altman, co-founder of OpenAI, the company behind ChatGPT, argues that governments must establish controls over AI companies.
“My worst fears are that we… the tech industry will cause significant damage to the world. I think that could happen in many different ways,” he said in a recent appearance before the United States Senate.
“I think if this technology goes wrong, it can go badly wrong, and we want to talk about it,” Sam Altman stressed. “We want to work with the government to prevent that from happening. But we try to be very clear about what the negative case is and the work we have to do to mitigate it.”
“Given that we are going to face elections next year and these models are improving, I think this is a significant area of concern,” he closed. “I think that there are many policies that companies can voluntarily adopt.”
Bill Gates acknowledges the risks of Artificial Intelligence
Bill Gates is another Artificial Intelligence enthusiast, but he still understands those who fear it. He addressed the risks in a post on his Gates Notes blog.
“AI is changing so fast that it’s not clear exactly what will happen next,” said the Microsoft founder. “We are faced with big questions posed by the way current technology works, the ways in which people will use it with bad intentions and the ways in which AI will change us as a society and as individuals”.
“The future of AI is not as bleak as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed,” says Bill Gates.
One of his focuses is on deepfakes used to manipulate. “Virtually anyone can create fake audio and video. If you get a voicemail that sounds like your child saying ‘I’ve been kidnapped, send $1,000 to this bank account within the next 10 minutes and don’t call the police,’ it’s going to have a terrible emotional impact, far beyond the effect of an email saying the same thing.”
“On a larger scale, deepfakes generated by AI could be used to try to tip an election,” he notes. “Of course, you don’t need sophisticated technology to cast doubt on the rightful winner of an election, but AI will make it easier.”
Another of Bill Gates’ concerns is an arms race around AI. “All governments want to have the most powerful technology to be able to deter attacks by their adversaries,” he underlines. “This incentive to let no one get ahead could spark a race to create ever more dangerous cyberweapons. Everyone would be worse off.”
“Governments should consider creating a global body for AI similar to the International Atomic Energy Agency,” he suggests.