Leading AI (artificial intelligence) companies, including OpenAI, Alphabet and Meta, have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the US government said.
The companies — which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft — pledged to thoroughly test systems before releasing them, to share information on how to reduce risks and to invest in cybersecurity.
The move is seen as a win for the Biden administration’s effort to regulate the technology, which is experiencing a boom in investment and popularity.
Ever since generative AI — which uses data to create new content, such as ChatGPT’s human-sounding prose — became wildly popular this year, lawmakers around the world have begun to consider how to mitigate the technology’s dangers to national security and the economy.
In June, US Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to advance and ensure safeguards on artificial intelligence.
Congress is considering a bill that would require political ads to disclose whether AI tools were used to create images or other content.
Biden, who will host executives from the seven companies at the White House on Friday, is also working on developing an executive order and bipartisan legislation on AI technology.
As part of the effort, the seven companies have committed to developing a system to “watermark” all forms of AI-generated content — text, images, audio and video — so users know when the technology has been used.
The watermark, embedded in the content in a technical manner, will presumably make it easier for users to spot deepfake images or audio that may, for example, depict an act of violence that did not occur or distort a photo of a politician to cast them in an unflattering light. It is not yet clear how the watermark will remain evident when the content is shared.
The companies also pledged to focus on protecting users’ privacy as AI develops and to ensure the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems such as medical research and climate change mitigation.