Billionaire Elon Musk warned that AI (artificial intelligence), if programmed by people from the “environmental movement”, could lead to the extinction of humanity.
The remarks came during an episode of the podcast hosted by comedian Joe Rogan on Tuesday (31). Musk said some people would use the technology to protect the planet by ending human life.
The owner of X (formerly Twitter) made the statement shortly before attending an AI safety summit in the United Kingdom.
During the event, Musk is expected to meet with Rishi Sunak, the Prime Minister of the United Kingdom.
Many experts consider his warnings exaggerated.
Nick Clegg, Meta’s president of global affairs and a former UK deputy prime minister, who is also attending the summit, said people should not allow “speculative, sometimes somewhat futuristic forecasts” to crowd out more immediate challenges.
Musk said his comments were born out of fear that the environmental movement was “going too far.”
“If you start thinking that human beings are evil, then the natural conclusion is that they should die,” Musk said.
“If AI is programmed by those who advocate the extinction of our species, the utility function of the technology will be the extinction of humanity… And they won’t even think that’s a bad thing.”
Musk is also expected to hold a conversation with Sunak on his platform, X, next Thursday (2).
Representatives from some of the world’s most powerful countries are attending the summit, including, unusually, China, which is a key player in AI.
Relations between China and many European and North American countries are tense in several areas of technology, but the country’s vice minister of science and technology, Wu Zhaohui, said he was seeking a spirit of openness on AI.
“We call for global collaboration to share knowledge and make AI technologies available to the public,” said Wu.
“Countries, regardless of size and strength, have equal rights to develop and use AI. We should increase the representation and voice of developing countries,” he added.
AI is ‘very important’
While few people share Musk’s interpretation of the threat of AI, many agree that it poses some potential dangers.
In a speech before the summit, Demis Hassabis, co-founder of Google DeepMind, one of the UK’s largest AI companies, said the “move fast and break things” mantra commonly associated with Silicon Valley should be avoided in this case.
“It has been extremely successful at building large companies and providing many services, with excellent applications,” Hassabis said.
“But AI is very important. There is a lot of work that needs to be done to ensure we understand [AI systems] and know how to deploy them safely and responsibly.”
He identified potential risks, including AI-generated misinformation and deepfakes, as well as the deliberate misuse of the technology.
Safety discussions
Over the next two days, around a hundred world leaders, technology chiefs, academics and AI researchers will gather at the Bletchley Park campus in the UK.
The site was once home to experts who helped crack German encryption codes and ensure Allied victory during World War II.
They will take part in discussions about how best to maximize the benefits of artificial intelligence, such as discovering new medicines and working on potential solutions to climate change, while minimizing the risks.
The summit will focus on the extreme threats posed by so-called frontier AI, the most advanced forms of the technology, which Hassabis described as the “tip of the spear.” The event will also address the threats of bioterrorism and cyber attacks.
International delegates include US Vice President Kamala Harris and European Commission President Ursula von der Leyen.
Some critics point out that the guest list is dominated by US giants, including ChatGPT creator OpenAI, Anthropic, Microsoft, Google and Amazon, as well as Elon Musk, the owner of Tesla and X (formerly Twitter).
Other critics question whether announcements on AI safety made earlier this week by both the US and the G7 have overshadowed the event, but Hassabis believes the UK can still play “an important role” in the discussions.
‘It looks like science fiction’
Aidan Gomez, co-founder of Cohere, an AI platform for businesses, traveled to the UK from Toronto, Canada, to attend the summit. His company was valued at US$2 billion (R$10 billion) in May 2023.
He said he believed there were more immediate threats than the “Terminator apocalyptic scenario.” Gomez described this possibility as “kind of science fiction.”
“In my personal opinion, I would like us to focus more on the short term, where there is concrete policy work to be done,” he said.
“The technology is not prepared to, for example, prescribe medication to patients, where an error could cost a human life.”
“We really need to preserve human presence and oversight of these systems… We need regulation to help guide what are acceptable uses of this technology.”