- The EU Artificial Intelligence Law is about to be approved and classifies AI applications into four risk levels.
- The approval of the European law marks a milestone in the regulation of artificial intelligence, a technology that is advancing faster than the rules meant to govern it.
- In China, new laws prohibit the use of generative AI technologies to produce fake news, and a man has been arrested for doing so with ChatGPT.
The Artificial Intelligence Law of the European Union is one step away from being approved after a committee of legislators in Parliament voted in its favor.
It is the first set of Western standards to regulate the operation and limits of generative artificial intelligence programs such as ChatGPT, Bard and others.
The approval marks a milestone in the race among governments to control the advance of artificial intelligence, which is evolving much faster than the rules intended to govern it.
The rule, known as the European Artificial Intelligence Law, is the first of its kind for AI systems in the West. It should be remembered that China has already developed regulations governing companies that build generative AI products such as ChatGPT.
The rules advancing in the European Parliament specify, among other things, the requirements that providers of so-called “foundation models” such as ChatGPT and Bard must meet; these systems have become a concern for regulators because of how advanced they are becoming.
That concern is compounded by fears that they will begin to have a massive impact on the employment of skilled workers.
What does the artificial intelligence law in Europe say?
The European Union Artificial Intelligence Law, which still needs to be approved by the plenary assembly, classifies AI applications into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
Applications classified as “unacceptable risk” will be prohibited outright and may not be deployed in the European bloc.
That group includes AI systems that use subliminal, manipulative or deceptive techniques to distort behavior, as well as systems that exploit the vulnerabilities of specific individuals or groups. The prohibition also covers biometric categorization systems based on sensitive attributes or characteristics, and systems used for social scoring or trustworthiness evaluation.
Although some legislators had demanded tougher measures to ensure that systems like ChatGPT are covered, this was not the case.
China and the control of ChatGPT
The advance in the West follows in the footsteps of China, where new laws related to deepfakes prohibit service providers and users from using these technologies to produce, edit, disseminate and share false information online.
The rules, which have been in effect since January 2023, were specifically designed to stop the use of generative artificial intelligence technologies aimed at altering online content.
Along these lines, last week, as we reported in Merca2.0, a man living in the Chinese province of Gansu was arrested, becoming the first person in that country to be detained for allegedly generating fake news with ChatGPT.
The suspect, surnamed Hong, allegedly wrote and edited news items generated with ChatGPT and posted them on a Baidu-owned platform.