- Major AI companies committed at the White House to ensure the safety of their tools before releasing them to the public.
- The initiative includes Google, Microsoft, OpenAI, Amazon, Anthropic, Inflection, and Meta.
- The executives agreed to measures such as allowing consumers to identify AI-generated content and collaborating with independent experts to assess the security of their systems.
The leading artificial intelligence companies, including Google, Microsoft, and OpenAI, met at the White House on Friday to make voluntary commitments to ensure the safety of artificial intelligence (AI) tools before their public release.
Amazon, Anthropic, Inflection, and Meta also joined the initiative.
Each of these seven companies agreed to a series of measures, including:
- allowing consumers to identify AI-generated content,
- engaging independent experts to assess the safety of their tools,
- sharing information with other industry players and governments,
- and allowing third parties to find and report vulnerabilities in their systems.
Safety in the field of artificial intelligence has gained prominence since OpenAI launched ChatGPT last year, a generative AI tool capable of producing creative, conversational responses.
Concern about the risks this technology could bring has led industry leaders to voice public fears about the accelerating pace of AI development.
AI: safe and reliable?
In an effort to ensure the safe development of AI without restricting innovation, US President Joe Biden is seeking to establish rules and standards around this technology.
Although Congress is considering regulation, implementing it could take months or even years, given the need for a full understanding of how the technology works and what risks it poses, according to CNBC.
Thus, executives from Amazon Web Services, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI met at the White House this Friday, July 21, and made these commitments.
Additionally, the White House is negotiating with other countries to secure international standards related to AI regulation.
US Vice President Kamala Harris had already held earlier meetings with CEOs of AI companies and with civil liberties and labor experts to assess the challenges of using this technology responsibly.
With these commitments from the leading artificial intelligence companies, the hope is to move toward safe and ethical AI development, mitigating risks and securing a more trustworthy technological future. Will that happen?