A month ago, scientists, engineers and business leaders called for a moratorium on the deployment of AI-based tools such as OpenAI’s ChatGPT. Two weeks ago, the Italian data protection authority blocked access to ChatGPT from its territory. Now it is the Spanish, German, Irish and Dutch authorities, among others, who are asking the Californian company for explanations.
Sam Altman, chief executive of OpenAI, claims on the one hand that the company complies with all privacy laws, yet on the other refuses to disclose any information about the data used to train the latest version of ChatGPT. “Let not your left hand know what your right hand is doing.”
AI must not only be efficient, but also beneficial to society
The European institutions are aware of the risks, for people and for democracy, of letting Californian companies make technological decisions with profound social impact. The ghost of social networks still hangs over Europe, both its institutions and the very idea of it. The resurgence of fascism, denialism and even flat-eartherism are the tip of that iceberg. Cambridge Analytica, Brexit, the assault on the Capitol and the assault on Brasilia are still too present. All of these cases are examples of the misalignment between the goals of social media AI algorithms and those of society. Despite not being very intelligent, these algorithms have the potential to affect the behavior of billions of people.
Since the main objective of these algorithms is to keep us on the platform as long as possible, we might think that the best way to achieve this is to give users what they like. Not so. The more extreme a user’s thinking, the more predictable her behavior, and therefore the easier it is to know where she will click. Users with nuanced views, a critical spirit, or iconoclastic tastes are too unpredictable. The content the algorithms recommend to us is therefore not what we might like the most, but what polarizes us the most. Advertisers pay for that; democracy does too.
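The mechanism can be sketched in a toy model. This is purely illustrative, not any real platform’s code: the point is that a ranker rewarded for *expected* clicks favors content whose audience response it can predict with confidence, which advantages polarizing material even when its raw appeal is lower. All item names and numbers below are invented for the example.

```python
# Toy feed ranker: expected payoff = predicted click probability,
# discounted by how confident the model is in that prediction.
def expected_value(p_click, confidence):
    """Expected clicks from showing an item to a user population."""
    return p_click * confidence

# Hypothetical catalog: (name, predicted click prob., prediction confidence).
# Extreme content has reliable audiences; nuanced content does not.
items = [
    ("nuanced essay",   0.6, 0.3),  # liked by many, but hard to predict
    ("polarizing post", 0.5, 0.9),  # extreme users click very reliably
    ("neutral news",    0.4, 0.5),
]

ranked = sorted(items, key=lambda i: expected_value(i[1], i[2]), reverse=True)
print([name for name, *_ in ranked])
# → ['polarizing post', 'neutral news', 'nuanced essay']
```

Despite having the lowest standalone appeal among the top two, the polarizing post wins (0.5 × 0.9 = 0.45) because its audience is predictable; the nuanced essay, appealing but unpredictable, sinks to last (0.6 × 0.3 = 0.18).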
That is why the Commission and the European Parliament are debating this week a new regulatory framework to face the challenges that AI poses. The project is as ambitious as it is necessary. To its credit, Europe already has the GDPR, which ChatGPT may be violating, and the abolition of data-roaming charges. Now it is about making AI regulation a new success story; this is not about penalizing AI research, but about making sure that its objectives are aligned with ours.
And although it monopolizes the debate, this is not about ChatGPT; it is about how, last November, OpenAI turned itself into the largest AI laboratory in the world; it is about how our public data serves private interests. It already happened with social networks, and that experiment has not ended entirely well.