A European Union regulation to prevent abuses by large internet companies takes effect this Friday (the 25th), forcing platforms such as Google, Facebook, X (formerly Twitter) and TikTok to improve their tools for combating illicit content, under penalty of severe sanctions.
At the same time, the bloc is also discussing a regulation on artificial intelligence. The EU already had a draft text that would control, or even ban, technologies deemed “high risk” to humanity. But the discussion changed with the explosion of ChatGPT and similar tools.
Find out where this debate currently stands.
Why is regulation of artificial intelligence necessary?
As artificial intelligence advances, regulators around the world are grappling with a series of concerns:
- Should they intervene in algorithms that can skew or distort decisions that affect the everyday lives of billions of people?
- What about the risk of chatbots such as ChatGPT fueling the production of misinformation online or leading to the misuse of large amounts of personal data?
- And what should they do about warnings that computers could soon reach a level of intelligence beyond their creators’ control—with potentially dire consequences for humanity?
Technology is advancing so rapidly – and the potential risks are, in some cases, so poorly understood – that there is still little consensus on a regulatory agenda.
What AI issues are regulators looking at first?
The European Union was well on its way to finalizing groundbreaking legislation that would control, or even outlaw, supposedly “high risk” AI systems – such as those used to make decisions on job applications, loans or health care. Then the ChatGPT craze exploded: huge public interest in the generative AI chatbot that OpenAI made available for free.
Lawmakers quickly tweaked their plans to include so-called foundation models, such as the large language model that underlies ChatGPT.
EU rules would oblige companies to disclose what data those models were trained on and could hold them accountable for misuse of the technology, even when they don’t control the end applications.
However, Patrick Van Eecke, co-chair of the global cybersecurity, data and privacy practice at law firm Cooley, believes Brussels has moved too quickly in trying to regulate a technology that is still “a moving target”.
“We like to regulate reality before it even becomes reality,” he says, echoing a view widely shared in the AI world.
Many US tech executives, though, have a different explanation. They see the rules as a deliberate protectionist move by the EU, imposing limits on a group of mostly American companies that dominate the AI industry.
Will EU AI regulation become a model for the rest of the world?
That’s what happened with the bloc’s data protection legislation, the GDPR, and it’s a potential development that worries US tech companies. Supporters of the EU’s AI Act say it will be applied flexibly to reflect changing standards and technological advances.
But critics say experience shows that Brussels tends to take a more dogmatic approach in practice – and that the rules set now may limit the technology’s evolution.
Some European companies agree. In a letter to the European Commission in June, 150 major European companies warned that the law could harm the bloc’s economy by preventing companies there from freely using important AI technology.
A final version of the law needs to be agreed in a trilateral negotiation between Parliament, the Commission and the Council of Member States – which means that there is still a chance for changes.
Aren’t AI companies asking for regulation?
The AI industry has learned from the backlash against social media that it does not pay to duck regulation of technologies that can have significant social and political impact.
But that doesn’t mean they like what’s planned by the EU. Sam Altman, head of OpenAI and an ardent supporter of AI regulation, told the Financial Times that his company may have to withdraw completely from the EU if the final rules on AI are too strict. The controversy his words provoked quickly led him to back down, but behind the scenes, US concerns remain undiminished.
The willingness of big tech companies to call for regulation has also raised suspicions that they see it as a way to consolidate their dominance of the AI market. Higher costs and bureaucracy can make it harder for new competitors to enter.
What is the alternative to the European Union approach?
Before deciding on new laws, many countries are taking a closer look at how their existing regulations apply to applications powered by AI.
In the United States, for example, the Federal Trade Commission has opened an investigation into ChatGPT using its existing powers. Among its concerns is that ChatGPT collects personal data and sometimes uses it to reproduce false and harmful information about ordinary people.
Meanwhile, unlike their European counterparts, US lawmakers have embarked on a broad review of AI that explicitly seeks to balance the technology’s benefits against its potential harms. US Senate Majority Leader Chuck Schumer has called for a series of expert briefings and forums to help the top Senate committees decide which aspects of AI may need regulation.
Holly Fechner, co-chair of the technology industry group at law firm Covington & Burling, said, “Significant bipartisanship in Congress on US competition with China” makes Schumer’s approach “a winning message – and signals that the US is going in a different direction than Europe”.
If governments don’t regulate now, won’t the AI race become a dangerous free-for-all?
Many tech companies argue that AI development should mirror the early days of the internet: regulators held back then, allowing innovation to flourish, and only intervened later when needed.
There are already signs that new industry standards and agreements on best practices in AI are beginning to be adopted, even without explicit regulation. In the United States, for example, industry has been working with the National Institute of Standards and Technology to codify the best ways to design, train and deploy AI systems.
There have also been pledges to release more information about how the big models work and what data was used to train them – though it’s not clear how willing companies will be to do this. Meanwhile, there is no shortage of proposals for new verification systems to check whether online content was produced by a generative AI system or by a person.
A lack of progress on these and other fronts would considerably intensify demands for regulation.
Some people who develop AI say it could destroy humanity. Is this not grounds for immediate regulation?
Few people in the tech industry believe that today’s AI systems pose an existential threat to humanity, and there is no agreement on when – if ever – the technology might reach that point. However, earlier this year, an open letter signed by many technologists called for a six-month moratorium on work on the most advanced systems, to allow time to develop new safety protocols.
While governments have begun to consider this issue, new international agreements would be needed to try to control the spread of dangerous AI. Even so, such efforts may be impractical given the wide availability of computational resources and datasets needed to train AI systems.
For now, the same companies that are leading the charge in AI claim to also be at the forefront of efforts to control it. OpenAI said in July that it was creating an internal team to start researching ways to control “super-intelligent” computers, which it believes could emerge this decade. Its proposed solution: a new AI system to control AI.