New York (EFE).- Microsoft President Brad Smith on Tuesday backed the creation by the United States government of an independent agency to regulate uses of artificial intelligence (AI), during a Senate session on privacy and legality.
“Let us require licenses for advanced AI models used in high-risk scenarios. Let’s have an independent agency that can really and effectively police this category,” Smith told a panel of senators, who questioned him on issues ranging from China’s influence to teen safety.
Artificial intelligence at the center of debate
The meeting, the third convened this year by the Senate Judiciary Committee on AI, could help US lawmakers identify the main issues and points of conflict ahead of future regulation of the technology.
Nvidia’s chief scientist, William Dally, and Boston University Law Professor Woodrow Hartzog also participated as witnesses in the session.
All three favored government intervention on issues such as election misinformation and the protection of user data.
However, Smith was reluctant to raise the minimum age for interacting with Microsoft’s Bing chatbot, whose adoption of AI helped open the door to the deployment of large language models such as ChatGPT across a range of applications and fueled a new wave of concern in the industry.
The executive explained that such tools can help students improve in subjects such as mathematics and English, and said he did not want to restrict their use to those over 18, as a Republican senator had requested.
The need for limits on AI
The session followed the pattern of recent efforts by US authorities to rein in an industry that is advancing by leaps and bounds, with the sector’s main representatives expressing willingness to cooperate but without concrete progress.
Just two months ago, the country’s major technology companies endorsed a set of safety measures proposed by the White House, which include internal and external testing of their AI systems before public release.
The US government also said it is working to translate the voluntary commitments made by US companies into an international framework on the use of AI, adding that consultations have already taken place with countries including Brazil, Canada, South Korea, France, Germany, India, Israel, Japan, Mexico and the United Kingdom.