- Governments are making progress in regulating artificial intelligence, which could affect companies, brands and people who use it.
- Europe is leading the effort with the passage of an AI Act imposing limits on its use, while the United States is also considering regulatory measures, although progress has been slower.
- AI industry leaders are urging the passage of laws and the creation of agencies to oversee AI, but they diverge on the approach.
All over the world, governments are advancing plans to regulate artificial intelligence (AI), a technology that in recent months (yes, months!) has experienced something that could be called a “quantum leap” in terms of sophistication and, above all, social impact.
Although still in its early stages, such forward-looking rules could affect not only the powerful tech companies that build and market artificial intelligence in their products and services, but also the brands and people who use it.
Europe has taken the biggest strides in efforts to establish new AI regulations.
On Wednesday, June 14, the European Union passed a draft of its AI Act, a regulation intended to impose limits on the use of AI, particularly facial recognition software, and to require transparency from AI companies.
A final version of the law is expected to be approved later this year. Keep in mind, though, that given the current pace of advances in AI, the end of 2023 is a long way off, and by then the rules may already be too late.
Progress has been somewhat slower in the United States (imagine how slow it will be in Latin America). Still, both the public and private sectors in the country led by Joe Biden are beginning to embrace the idea of regulating AI.
In October 2022, shortly before the launch of ChatGPT, the text-generating AI model that has been largely responsible for sparking the current global conversation about the need for AI safeguards, the White House published its “Blueprint for an AI Bill of Rights”.
It is a document drafted in an effort to “guide the design, use, and implementation of automated systems to protect the American public in the age of artificial intelligence.”
If you are a marketing student, you should know that things have not advanced much beyond that document.
What marketing students need to know about AI regulation
Some AI industry leaders have begun openly urging governments to pass new laws to impose barriers to AI.
Last month, for example, the CEO of OpenAI (the Microsoft-backed company behind ChatGPT), Sam Altman, testified before Congress, telling lawmakers that “if [artificial intelligence] goes wrong, it can go badly wrong”, and that his company was committed “to working with the government to prevent that from happening”.
In May, Altman and a cohort of other AI leaders signed an open letter stating that AI poses an “existential risk to humanity” and that mitigating it “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. We published it in Merca2.0.
In an opinion piece published in The New York Times, author and historian Yuval Noah Harari argued that the current inability of global society to regulate or control the psychologically damaging and politically divisive impacts of AI spells bad news for the new wave of more advanced AI models like ChatGPT.
“Social media was the first contact between AI and humanity, and humanity lost”, wrote the author. “That first contact has given us a bitter taste of what is to come.”
US: Divergences on how to approach AI regulation
Altman (OpenAI) advocates for the formation of a new government agency dedicated specifically to oversight of AI.
University of Florida law and engineering professor Barbara Evans, by contrast, says that the creation of new regulatory agencies “requires a level of consensus that is unlikely to be achievable in the current political environment.”
But “even if there were a consensus,” Evans argues, “forming a single AI oversight body is probably not a good idea.”
“Lawmakers and the public tend to talk about ‘AI’ as if it were a single, unified phenomenon,” she says. “In reality, ‘AI’ refers to thousands of computing tools that will be deployed in a wide variety of different environments, each of which poses different risks and offers different benefits to society.”
As a result, according to Evans, government regulation of AI “must be tailored to the precise setting where AI is deployed.”
Rather than a single AI-focused agency, Evans says it may be more effective “to have all current US federal agencies oversee the use of AI within the specific industries they already regulate; and, from there, develop clear instructions about who is doing what and how to share responsibilities between them.”