Leaving an AI to act at its own free will can be very dangerous. In fact, years ago an expert in the ethics of artificial intelligence gave an example in which an algorithm designed to manufacture the largest possible number of paper clips could destroy humanity. Even so, researchers in China wanted to run the experiment of putting an AI in command of one of their Earth observation satellites. And what they found, if it is what they suspect, is terrifying.
Initially, they thought that the artificial intelligence had turned its attention to random points on the planet, without any specific interest. However, when studying the chosen areas, they found that they all had a history of conflict with China. Therefore, it could be that the AI was looking for a confrontation.
It may all be coincidence, but there are reasons to fear that it is not. Consequently, this experiment, like that ethicist's theoretical example, highlights the importance of giving artificial intelligence concrete objectives. If such systems are left to act without limits, it seems that human beings are not among their priorities.
Artificial intelligence at the controls of a satellite
This experiment was described in an article in the South China Morning Post. The researchers explain how, despite it going against the mission they had underway, they decided to leave an artificial intelligence in command of one of their satellites for 24 hours. It was not given any instructions or objectives. They simply left it free, to see where it would focus the attention of the Qimingxing 1 satellite.
From the start, the chosen targets caught their attention. The AI first focused on Patna, a large city in India located along the Ganges River. It kept its observation there for a long time and then, after a new scan, moved on to the Japanese port of Osaka.
These could be random locations. However, a brief look at history is enough to see that, quite possibly, they are not.
The AI's war targets
For decades, China and India have had a border dispute over the Galwan Valley, located next to Tibet. Although the area was declared part of India while the country was under British rule, China has never accepted this.
The disagreement fueled a long-simmering conflict, but it did not flare up until 2020, when the first soldiers died on the border. At the very start of the COVID-19 pandemic, India denounced the deaths of 20 of its soldiers in the Kashmir region at the hands of the Chinese army.
Patna is not in the heart of the Galwan Valley, but it is in the northeast of the country, near Tibet. Furthermore, one of the deceased soldiers, Sunil Kumar, came from this city. Therefore, a search for information by the AI could have selected it as a war target.
As for Osaka, its port is known to occasionally host ships from the United States Navy operating in the Pacific.
All this suggests that, perhaps, the artificial intelligence's choice of these two points was not random. It could be that it was selecting war targets based on its country's historical information. Since it was given no mission, the algorithm could focus only on its owner: the Chinese government. The ethicist already warned of this. An AI does not know how to defend humanity; it only understands who it works for, and that is the only thing it will focus on. Therefore, if we are not looking for a conflict, we must make it very clear to the AI what its job is. Artificial intelligence can be very useful, but only as long as nothing is left to chance.