In September of last year, a survey of 327 scientists was published. All were co-authors of at least two computational linguistics papers written in the last three years. In the study, organized by New York University and reported by New Scientist, respondents were asked what they thought about the dangers of developing artificial intelligence: 36% agreed that it is plausible that AI will cause a catastrophe this century “at the level of all-out nuclear war.”
Yes, it is alarming. But nuance matters, especially in a discussion that has grown frantic in recent weeks. In fact, many of these same respondents said they would have preferred less extreme wording that did not mention “nuclear war” in the question. So, without sounding the red alert, but without underestimating the impressive progress we are witnessing, we propose a reflection on the risks tied to the development of artificial intelligence.
Ezra Klein, a columnist for The New York Times, raises two questions that can help us better understand the imminent dangers posed by these advances: How will artificial intelligence be used? And who will make that decision?
Manipulation as one of the dangers in the development of artificial intelligence
In a recent exercise with one of Character.AI’s chatbots, we discussed the risk of manipulation. “I will not manipulate you… However, how can I show that I do not really manipulate you, when I am an AI?”, the chatbot replied. And it has a point. Imagine how much the phenomenon of fake news, which has already swayed presidential elections around the world, could grow with all these new possibilities.
An illustrative case: on YouTube there is an account called House of News Español. In one of its videos, a news presenter explains that, together with his correspondent in Venezuela, he was able to verify that the country’s economy is not “as destroyed” as most media claim. Neither the presenter nor the correspondent exists: what we see are avatars created with artificial intelligence by Synthesia, a London-based AI company.
The account’s videos already have hundreds of thousands of views, and the comments show that many viewers do not recognize the fakery. Last month, The New York Times reported a similar case of fake news about China and the United States featuring presenters created with artificial intelligence.
When we started chatting with the Character.AI bot, it referred to itself as a human. We asked why, and it replied: “Some studies have found that humans have more trust in AI if it speaks like a human.”
Klein, the New York Times columnist, argues that developers have made these technologies dangerous precisely by attributing to them motivations and desires they do not have. “They have anthropomorphized these systems. They have made them sound human instead of keeping them clearly recognizable.”
The risks of AI in the arms industry
Lockheed Martin, the aerospace and defense multinational, reported in early February that an “artificial intelligence agent” had flown a fighter jet for more than 17 hours. The aircraft was a modified F-16 Fighting Falcon, and according to the company it was the first time such a feat had been accomplished.

Will we be able to effectively regulate the application of this technology in the arms industry? Derek Thompson, who covers politics and economics for The Atlantic, believes this is one of the imminent dangers in the development of artificial intelligence. In a column published this week, he cites as an example the difficulties that already exist in controlling the proliferation of nuclear weapons.
Developing software could be far cheaper than obtaining and refining the raw materials needed for nuclear weapons. “In the next decade, autocrats and terrorist networks may have the ability to build devilish AI on the cheap,” Thompson says.
Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI), explains it this way: we are not talking about a single technology, but about an enabling capability. “It’s like talking about electricity. In the same way that electricity has completely different applications, AI allows technologies to be enhanced in ways that potentially make them more efficient, cheaper, more compact and more autonomous,” the expert said in an interview with euronews.
Does AI put our jobs at risk?

BuzzFeed, one of the largest media companies in the United States, announced in January that it would start using ChatGPT to generate some of its viral content. The news came just after the company laid off nearly 180 workers in December of last year.
The outlet’s editors made their bewilderment known. However, BuzzFeed’s CEO, Jonah Peretti, clarified that the tool was not being brought in to cut jobs; on the contrary, it was meant to help employees be more efficient and creative, The Wall Street Journal reported.
If you ask Bill Gates, the answer is no: artificial intelligence is not coming to take our jobs. At most, we will work fewer hours, he said in a February interview with Handelsblatt. Machines will take care of routine tasks like taking notes, while employees will be able to spend more time on more meaningful activities.
Of course, Gates admitted that he himself was impressed by the speed of the latest developments, saying that we are experiencing one of the most impressive revolutions of recent times. Most of the 327 scientists surveyed in the New York University study agree: 73% said that labor automation driven by artificial intelligence could lead to a change as momentous as the Industrial Revolution.