A few days ago, British Prime Minister Rishi Sunak’s advisor on technology warned that artificial intelligence could become powerful enough to “kill many people” within just two years, cautioning that “the existential danger that is being talked about is what will happen once we create intelligence greater than human intelligence.”
“Unless producers and developers of artificial intelligence are regulated on a global scale, ‘very powerful’ systems may emerge that humans could struggle to control,” Matt Clifford was quoted as saying by the British newspaper The Independent.
Days later, Sam Altman, the CEO of OpenAI, said that he was afraid of the generative AI he had helped develop. In an interview with the Times of India, Altman said he was so stressed after launching the ChatGPT chatbot that he suffered from insomnia.
He told reporters he feared that OpenAI had done “a really bad thing”. He added that he did not think the launch of ChatGPT was a bad thing in and of itself, but that once the bot was released, he no longer had much influence over what happens next. Altman has made several statements about his concerns for the future of artificial intelligence, including fears that competitors might create malicious algorithms. He later signed an open letter warning that artificial intelligence could cause the extinction of humanity.
Is artificial intelligence getting out of control?
Amid all these warnings that artificial intelligence could become a ruthless killer capable of wiping out humans, British officials called for the urgent regulation of artificial intelligence producers and developers on a global scale, and for the creation of very strong systems that would enable everyone to control this new technology.
In response, Hassan Hamed, an Egyptian information technology expert, told Al-Arabiya.net that it is of course possible for artificial intelligence to escape human control because of its ability to make independent decisions, but that given the current state of the technology, this is unlikely to happen in the short term.
He adds: “On the other hand, the greatest danger of artificial intelligence is not that it escapes control, but rather its immoral use by the countries that develop it, treating it as a new kind of arms race that will impose its control on the scene in the next stage.”
The Egyptian expert stressed that laws regulating work in the field of artificial intelligence must not be absent; otherwise, they will end up being issued retroactively, after unforeseen consequences have already occurred, given how unclear the current vision of the technology’s future and the exact limits of its capabilities remain. He added that the imminent danger that should be discussed now is not artificial intelligence getting out of control, but rather the frantic arms race that has already begun over control of this field.