The major artificial intelligence companies have accepted a voluntary agreement to ensure the safe development of the technology. Microsoft, OpenAI, Google, Amazon and Meta are all part of this commitment, promoted by the White House. It is the first major agreement of its kind to bring together the most important companies in the sector.
Under the agreement, the technology firms agreed to submit their artificial intelligence tools to external audits before release. These evaluations, to be carried out by independent observers, will focus on biosafety, cybersecurity and impact on society.
The government “encourages this industry to maintain the highest standards, to ensure that innovation does not come at the expense of the rights and security of Americans,” the White House said in a statement. As this is a voluntary agreement, it does not contemplate any penalty for non-compliance. However, the companies said they would assume these commitments immediately.
Since the appearance of OpenAI’s ChatGPT, the race to develop these tools has accelerated, and so have concerns about how they can drive disinformation and manipulation campaigns, among other dangers. The problem becomes even more relevant in the run-up to the US presidential elections, scheduled for next year.
The agreement also includes Anthropic, founded by former members of OpenAI, and Inflection. The companies further promise to make it easier to discover and report vulnerabilities in their systems, even after they are released.
Practical measures in the artificial intelligence agreement
The agreement for safe artificial intelligence stipulates the development of systems to alert the public when an image, video or text has been created by artificial intelligence, what is usually called a “watermark.” Some companies, such as Google, have already been working on this measure.
Concerns about bias and privacy violations were also addressed in the pact. Companies will prioritize research on the social risks that artificial intelligence may pose in these areas.
“The record of artificial intelligence shows the insidiousness and prevalence of these dangers, and companies are committed to deploying artificial intelligence to mitigate them,” the White House document says.
The companies publicly celebrated the agreement. Anna Makanju, OpenAI’s vice president of global affairs, said the pledge contributes “specific and concrete practices” to the global debate on AI regulation. Brad Smith, president of Microsoft, said the company expects “wide industry adoption and inclusion in global discussions.”
Outside the industry, the initiative was also welcomed, though with somewhat more mistrust. “History would indicate that many technology companies do not actually honor a voluntary commitment to act responsibly,” Jim Steyer, director of the consumer advocacy group Common Sense Media, said in a statement.
Until the law arrives
The White House explained that the initiative is only a measure “for now.” It still plans to promote bipartisan legislation in Congress to “help America lead the way in responsible innovation.” Meanwhile, talks are advancing in the G7, which brings together the democracies with the richest economies in the world, and Great Britain is organizing a summit before the end of the year.
Outside the United States, the largest regulatory effort is in the hands of the European Union. In June, the European Parliament approved a draft of the Artificial Intelligence Act and began negotiations with the European Commission and member countries to define the legislation to be implemented.
Another voluntary agreement was proposed last May by Alphabet, Google’s parent company, and the European Commission. Thierry Breton, commissioner for the European internal market, and Sundar Pichai, CEO of Google, announced at the time that they would work toward an agreement that includes both European companies and companies from other parts of the world.