Geoffrey Hinton, often described as the godfather of artificial intelligence, on Thursday urged governments to step in to ensure that machines do not take over society.
Hinton made headlines in May when he announced his resignation from Google to speak more freely about the dangers of artificial intelligence, shortly after the launch of ChatGPT captivated the world.
The AI scientist, who teaches at the University of Toronto, spoke to a packed audience at the Collision tech conference in the Canadian city.
The conference attracted more than 30,000 startup founders, investors and tech workers, most of whom came looking to learn how to ride the wave of artificial intelligence rather than to hear lessons about its risks or calls for government intervention.
“Before AI gets smarter than we are, I think the people who develop it should be encouraged to put a lot of effort into understanding how it might try to take away control from us,” Hinton said.
“Right now there are 99 very smart people trying to make AI better, and one very smart person trying to figure out how to stop it from taking over, and maybe that balance should shift,” he added.
Hinton warned that the dangers of artificial intelligence must be taken seriously.
He continued, “I think it is important for people to realize that this is not science fiction or mere fearmongering,” stressing, “It is a real risk that we must think about carefully, and we must figure out in advance how to deal with it.”
Hinton also expressed concern that AI would deepen inequality, as the massive gains from its productivity would go to the rich rather than workers.
“The wealth will not go to the people who do the work; it will make the rich richer, not the poor, and that is very bad for society,” he said.
He also noted the risk of “fake news” generated by chatbots such as ChatGPT, and expressed his hope that AI-generated content could be marked, much as central banks watermark banknotes.
“It is very important that we try, for example, to mark everything that is fake as fake. Whether we can do that technically, I don’t know,” he said.