The Californian company OpenAI, creator of ChatGPT, announced the launch of tools aimed at combating disinformation ahead of the dozens of elections scheduled this year in countries that are home to almost half of the world’s population.
The success of ChatGPT has popularized generative artificial intelligence (AI), but it has also raised concerns about the technology’s possible use in disinformation campaigns aimed at swaying voters.
With dozens of elections planned this year, notably in the United States, India and the United Kingdom, OpenAI on Monday underlined its intention not to allow its technological tools, including ChatGPT and the image generator DALL-E 3, to be used for political purposes.
“We want to be sure that our technology will not be used in a way that undermines” the democratic process, OpenAI explains in a blog post.
“We are still evaluating the potential effectiveness of our tools for individual persuasion. Until we know more, we will not allow people to build apps for political or activist campaigns,” the company adds.
Disinformation — the intentional dissemination of false information — and misinformation — its unintentional dissemination — powered by AI could “undermine the legitimacy of newly elected governments,” warned the World Economic Forum, which runs until January 19 in Davos, Switzerland, last week.
Fears of election disinformation date back several years, but the general public’s access to powerful AI-powered text and image generators greatly enhances the threat, experts say.
OpenAI said on Monday it was working on tools to reliably determine the provenance of text generated by ChatGPT and to give users the ability to detect whether an image was created using DALL-E 3.
“Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials,” the company explains.
This coalition, known as C2PA, brings together Microsoft, Sony, Adobe, Nikon and Canon.
According to OpenAI, when asked about the American elections, ChatGPT will, for example, direct voters to authoritative sites.
“The lessons learned from this work will form the basis of our approach in other countries and regions,” the company says, adding that DALL-E 3 contains “guardrails” to prevent users from generating images of real people, particularly candidates.
The American giants Google and Meta announced similar initiatives last year intended to regulate the use of AI in political campaigns.