Amazon, Google, Meta, Microsoft and other companies leading the development of artificial intelligence have agreed to a set of AI safeguards brokered by US President Joe Biden’s administration. The White House announced Friday that it had secured voluntary commitments from seven American companies aimed at ensuring the safety of their AI products before they are released.
The White House said in a statement that the four tech giants, along with ChatGPT developer OpenAI and startups Anthropic and Inflection AI, have committed to security testing “carried out in part by independent experts” to guard against major risks such as those to biosecurity and cybersecurity.
The companies also committed to establishing ways to report vulnerabilities in their systems and to using digital watermarks to help distinguish real images from AI-generated ones.
The White House said the companies would publicly report flaws and risks in their technologies, including impacts on fairness and bias.
The voluntary commitments are intended as an immediate way to address the risks ahead of a long-term effort to get Congress to pass laws regulating the technology.
Some advocates of AI regulation, however, said that Biden’s move is only a beginning and that more must be done to hold the companies and their products accountable.
For his part, Senate Majority Leader Chuck Schumer, a Democrat from New York, said he would introduce legislation to regulate artificial intelligence. He has held a number of briefings with government officials to educate senators about an issue that has attracted bipartisan interest.
A number of tech executives have called for AI regulation, and several of them went to the White House in May to speak with Biden, Vice President Kamala Harris, and other officials.