By Julio Vega, General Director of the MX Internet Association
Trying to ban or stop AI research is like trying to stop a stampede with STOP signs. The resurgence and rapid proliferation of this technology around the world signaled the start of a race that is not merely competitive but, frankly, an arms race, in which no one can afford to slow down. One of the main risks of the flourishing of AI is being left behind. That is the necessary starting point.
Artificial Intelligence (AI) applications burst unexpectedly onto the global business landscape, accelerating political and legislative processes as governments and lawmakers sought a position on this technology. ChatGPT, Bard, Copy.ai, and YouChat, among many others, are just one facet of AI's full future potential, but their accessibility and ease of use for the general public brought the concept back into prime time. Even those least interested in information technology may already be users of AI applications (even without knowing it).
The new urgency injected fresh momentum into regulatory efforts that had been studying the issue for several years, drawing in new waves of interested parties, not a few of them clearly on the prohibitionist side. To top it all off, on March 29 billionaire Elon Musk gathered 100 signatories to back a public petition to halt AI research for six months. The central idea of the declaration: "The most powerful AI systems should be developed only after we are sure that their effects will be positive and their risks are manageable."
It sounds good, but a more realistic position is held by the European Union, which has sought to legislate on this matter for several years and which in December 2022 published an Artificial Intelligence Law. It seeks to "create a safe space for innovation in AI, which meets a high level of protection of the public interest, security and fundamental rights and freedoms."
The European law establishes four levels of risk in the uses of AI. Minimal risk carries almost no legal obligations (although a code of conduct is recommended). Limited risk, which includes chatbots, emotion recognition, deep fakes, biometric categorization, and impersonation, is subject to full transparency obligations, to the point of having to open its algorithms to the authorities. Two further levels follow, the last of which, covering social scoring, mass surveillance, and behavioral manipulation, is deemed "unacceptable risk" and is prohibited.
But where Mexican legislation should start is with promotion. It is strategic for the country to have a specific artificial intelligence policy, incorporated into the national digital policy, under which public and private scientific and technological institutions can research freely and more people can be trained in these technologies.
Of course, a regulatory framework is needed, one that protects human rights, guarantees non-discrimination and inclusiveness in all their forms, and safeguards personal data and the right to privacy, among many other things. But the worst risk for the country is to be left behind, to repeat a history of consumption rather than creation.
Cybersecurity will be one of the hottest topics related to artificial intelligence in the future. Just imagine intelligent malware, capable of changing and adapting to cybersecurity measures at great speed. What would it be like to go from updating your antivirus every three months to having to do it every five or ten minutes? The Mexican State must be prepared to run in this race, because if it does not, it will once again depend on solutions developed abroad, without the capacity to evaluate any given solution on its own.
Likewise, the State must be prepared to evaluate and counter fake news and deepfake campaigns, which will spread ever more effectively as the days go by. States in general have not chosen to directly confront the manipulation of public opinion through bots on social networks, and now they will have to face the creation of perhaps millions of fake profiles, fueled by an AI better able to feign humanity.
It is each society's right to decide whether to prohibit the development of AI weapons of war, where a drone can make life-or-death decisions. Likewise, a society can decide to streamline its judicial system through algorithms that resolve the most frequent cases, for the benefit of more expeditious justice.
But for this, full access to this knowledge is necessary, which is why it falls to the three branches of the Union to approach those who are generating AI developments, as well as those who are already studying its effects across all areas of society.
Editor’s Note: This text belongs to our Opinion section and reflects only the author’s vision, not necessarily the High Level point of view.