A group of artificial intelligence experts and technology industry executives has called for a six-month pause in the training of powerful artificial intelligence systems, arguing that such systems pose a potential threat to humanity.
In an open letter, they claimed that the laboratories working on this technology are locked in “an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
The statement was signed by more than 1,000 people, including businessman Elon Musk, Apple co-founder Steve Wozniak, and Stability AI CEO Emad Mostaque, as well as researchers from DeepMind.
In it, they ask that companies developing these kinds of programs “immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4.”
GPT-4 is the most advanced model behind ChatGPT, one of the most powerful artificial intelligence systems in the world, developed by the company OpenAI.
Both GPT-4 and ChatGPT are a form of generative artificial intelligence; that is, they use algorithms and predictive text to create new content based on prompts.
“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the text adds.
Issued by the non-profit organization Future of Life Institute, which counts Elon Musk among its external advisers, the statement warns that these systems can pose “profound risks to society and humanity.”
The think tank argues that powerful AI systems can generate misinformation and replace human jobs through automation.
“300 million jobs” could disappear
A recent report by the investment bank Goldman Sachs estimates that artificial intelligence could replace the equivalent of 300 million full-time jobs.
This technology could replace a quarter of work tasks in the US and Europe, the report adds, but it could also create new jobs that did not exist before and boost productivity.
Experts consulted by the BBC on this issue say that at this point it is very difficult to predict the effect the technology will have on the labor market.
Should we develop non-human minds?
The letter signed by the experts poses the following question: “Should we develop non-human minds that might eventually outnumber us, outsmart us, make us obsolete, and replace us?”
In a recent post cited in the letter, OpenAI, the company behind GPT-4, also warned of the potential risks of the technology.
“A misaligned superintelligence could cause serious harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” the company wrote in a blog post.
OpenAI has not publicly commented on the letter.
Elon Musk was a co-founder of OpenAI, although he stepped down from the organization’s board of directors several years ago and has posted critical messages on Twitter about the company’s direction.
The autonomous driving features developed by his car company Tesla, like most other similar systems, use artificial intelligence technology.
A number of proposals for regulating the technology have recently been put forward in the US, the UK, and the European Union. The UK, however, has ruled out creating a dedicated regulatory body for artificial intelligence.