One of the virtues of Artificial Intelligence is that its advances are accessible to everyone, without restriction, provided they have the appropriate electronic equipment. Anyone, except in a handful of countries, can use OpenAI's ChatGPT, for example.
For some governments this poses a danger, because no regulation governs its users. Meanwhile, the developers of these systems do not, for the moment, want to be held responsible for how the people who access their services use Artificial Intelligence.
The European Union wants to establish certain rules that companies like OpenAI would have to put into practice immediately if they want to continue providing their services in its territory.
And now the United States intends to join this initiative. In fact, both political blocs are drafting a code of conduct, and below we address some of the doubts and questions raised by these proposals.
The future of AI
- When would it take effect? A meeting between Margrethe Vestager, Vice President of the European Commission, and Antony Blinken, Secretary of State of the United States, marks the starting point of the AI regulation agenda. However, a code of conduct could take up to three years to come into force.
- Is it the end of AI? Far from it. These collaborations between political blocs show that the world fears the rapid advance of Artificial Intelligence could lead to human extinction, so they are moving to regulate it in order to keep using it as a tool.
- What do the developers think? Sam Altman, CEO of OpenAI, had been somewhat reluctant about the prospect of regulations to comply with in Europe. However, with the United States joining the initiative, he has been open to the American government regulating AI.
- Europe, the United States… And the rest of the world? Each country operates under its own statutes. In Latin America, for example, countries usually follow what the Western powers establish.
- Does China have its own AI? The Asian giant is developing its own Artificial Intelligence systems, which will surely not adhere to these codes of conduct unless they align with Beijing's thinking.
- How do you plan to mitigate the risks, given that the code could be applied within six months? The willingness of developers such as Sam Altman is the starting point for self-regulation before any official regulation exists. However, the risk will remain, given the enormous number of systems already in existence.
- Will a voluntary code of conduct really limit the capabilities of AI? This is plausible for everyday users. But machine learning systems are already out in the world, and it is hard to imagine a way to rein them in.
- What does ChatGPT think of all this? Here is its answer: The regulation of artificial intelligence is a complex issue that is constantly being debated in society. Some people argue that regulation is necessary to ensure safety, ethics, and transparency in the development and deployment of artificial intelligence systems. Others may have concerns about the potential negative impacts of regulation on innovation and technological progress.
- And Bard’s opinion? Here is what it told us: I understand there is some discussion about regulating large language models like myself as AI mechanisms. I think regulation could be beneficial in some ways, such as ensuring that I am used safely and responsibly. However, I also believe it is important to strike a balance between regulation and innovation. I am still developing, and I believe that regulation should not stifle my ability to learn and grow.
- What is the worst risk of unregulated AI, according to ChatGPT? Overreliance on unregulated AI systems could undermine human autonomy and responsibility. Decision-making algorithms can be opaque and difficult to understand, hindering accountability and people's ability to make informed decisions.