For the past 30 years or so, children have been told not to believe everything they see online, but that advice may now need to be extended to adults.
According to a report in the British newspaper the “Daily Mail”, the world is currently witnessing a boom in “deepfakes”, in which AI technology is used to manipulate video and audio clips so that they mimic real life with striking accuracy.
Dividing lines
To illustrate the point, what is billed as the world’s first transparently signed deepfake video has been released using software from the AI studio Revel.ai. It appears to show Nina Schick, a professional AI consultant, warning that “the lines between reality and fantasy” are blurring.
Of course, it wasn’t really Nina Schick in the video, and the clip was cryptographically signed by the digital authenticity company Truepic, declaring that it contained fake content created using artificial intelligence software.
Speaking slowly and clearly, the fake Schick says: “Some say the truth is a reflection of our reality. We are so used to defining it with our senses. But what if our reality changes? What if we can no longer rely on our senses to determine the authenticity of what we see and hear? We are at the dawn of artificial intelligence, and already the lines between reality and fantasy are blurring.”
The clip adds that in “a world where shadows are mistaken for reality, one sometimes needs to radically change one’s perspective to see things as they really are.”
Cryptographic signature
The high-resolution video ends with a message stating that the clip was faked using Revel.ai with the consent of Nina Schick herself, and that it was cryptographically signed by Truepic.
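Truepic has not published the internals of its signing pipeline, but the basic idea of cryptographically signing a clip can be sketched in a few lines of Python. The example below is a minimal illustration only, assuming the open-source cryptography package and a placeholder file; it is not Truepic’s actual scheme.

```python
# Minimal sketch of signing a clip's hash with an Ed25519 key pair and then
# verifying it. Illustrative only: the file and its contents are placeholders,
# and this is not Truepic's actual pipeline. Requires the 'cryptography' package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def file_digest(path: str) -> bytes:
    """Return the SHA-256 digest of the file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


# Stand-in for the published clip, so the sketch runs end to end.
with open("deepfake_disclosure.mp4", "wb") as f:
    f.write(b"placeholder video bytes")

# The signer (e.g. the tool that produced the clip) holds the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

signature = private_key.sign(file_digest("deepfake_disclosure.mp4"))

# Anyone holding the public key can later check the clip has not been altered.
try:
    public_key.verify(signature, file_digest("deepfake_disclosure.mp4"))
    print("Signature valid: the clip matches what the signer published.")
except InvalidSignature:
    print("Signature invalid: the clip was altered after signing.")
```

Because the signature covers a hash of the exact bytes that were published, changing even a single frame would make verification fail.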
Deepfake technology is a form of artificial intelligence that uses “deep learning” to process audio, images, or video and produce media content that looks very realistic but is in fact fabricated.
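For readers curious about the machinery, early face-swapping tools popularized a simple design: one shared encoder learns a generic representation of faces, and a separate decoder is trained for each identity, so that swapping decoders at playback time renders one person’s face with another’s pose and expression. The toy PyTorch sketch below illustrates that layout only; it uses random tensors, omits training, and does not correspond to any specific tool named in this article.

```python
# Toy sketch of the classic face-swap deepfake layout: a shared encoder plus
# one decoder per identity. Swapping decoders at inference "transplants" a face.
# Illustrative outline only (random tensors, no training data or loop).
import torch
from torch import nn


class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)


encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A's face
decoder_b = Decoder()  # would be trained to reconstruct person B's face

face_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped frame of person A

# During training, each decoder learns to rebuild its own person's face.
reconstruction_a = decoder_a(encoder(face_a))

# At inference, feeding A's encoding into B's decoder renders B's face with
# A's pose and expression -- the core trick of a face-swap deepfake.
swapped = decoder_b(encoder(face_a))
print(reconstruction_a.shape, swapped.shape)  # both (1, 3, 64, 64)
```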
President of Ukraine
One of the most notorious uses of deepfake technology was a crude impersonation of Ukrainian President Volodymyr Zelensky appearing to surrender to Russia in a video that circulated widely on Russian social media last year.
The clip shows the Ukrainian president speaking from his podium, calling on his forces to lay down their arms and surrender to the Russian forces. But savvy netizens immediately noticed the discrepancies between the color of Zelensky’s neck and face, the odd accent, and the mismatch between the background and the shadows around his head.
Recreational purposes
For all their entertainment value, deepfakes can pose serious dangers, experts have warned: concerns have been raised in the past about their use to create videos of child abuse, fake revenge pornography, and political hoaxes.
Draft law
In November 2022, the British Government’s Online Safety Bill was amended to make it illegal to use deepfake technology to create pornographic images or videos of people without their consent.
Deepfake technology has the potential to undermine democratic institutions and national security, said Dr. Tim Stevens, director of the Cyber Security Research Group at King’s College London.
A weapon in war
He said the wide availability of these tools means they could be exploited by warring states to “phish” and manipulate target populations in an effort to achieve foreign policy goals and “undermine” the national security of other countries.
Threat to national security
Dr. Stevens added that “there is the potential for artificial intelligence and deepfake systems to affect national security, and it is not only a matter of high-level defense and war between states, but more generally a matter of undermining trust in democratic institutions and the media.” Authoritarian regimes, he said, could exploit deepfake technology to falsify videos that would erode trust in the official institutions and organizations of the countries they are at war with.
Growing prevalence
With the advent of freely available AI tools for converting text to images and text to video, such as OpenAI’s DALL-E and Meta’s Make-A-Video, manipulated media will only become more prevalent.
In fact, it has been predicted that 90% of online content will be created using AI by 2025. Not every fake slips through, however: social media users were able to identify as fake an AI-generated image of a cat with black and yellow reptilian spots on its body that had been presented as a newly discovered species.
A standard for credibility
Cybersecurity and artificial intelligence experts hope that platforms and AI companies will be required to sign content generated by their software, establishing an open standard for content credibility.
Experts predict that artificial intelligence will soon play a part in producing almost all digital information, so without a way to verify whether a given piece of content is AI-generated, it will be very difficult to place trust and confidence in the digital information ecosystem.
A source of information
The experts said that although users may not yet realize they have a right to understand the source of the information they receive or see, they hope this campaign shows that such transparency is possible, and that it is a right users should claim.
The cryptographic digital signature complies with a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an industry body whose members include Adobe, Microsoft and the BBC and which works to tackle the spread of misinformation online.
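The C2PA specification itself defines a detailed, signed manifest format embedded in the media file. As a rough illustration of the underlying idea only, the sketch below binds a provenance claim to a clip by recording the clip’s hash inside the claim, so that any later edit breaks the link; the field names are hypothetical and this is not the real C2PA format. In practice the claim would then itself be signed, much as in the earlier signing sketch.

```python
# Rough illustration of the provenance idea behind standards like C2PA:
# a claim about how a piece of media was made is bound to the media's hash,
# so any later edit to the file breaks the link. Field names are hypothetical
# and this is NOT the real C2PA manifest format.
import hashlib
import json


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


clip_bytes = b"placeholder video bytes"  # stand-in for the published clip

claim = {
    "generator": "example-ai-video-tool",    # hypothetical tool name
    "content_type": "ai-generated",
    "subject_consent": True,
    "asset_sha256": sha256_hex(clip_bytes),  # binds the claim to this exact file
}
manifest = json.dumps(claim, sort_keys=True)  # this is what would be signed


def claim_matches(manifest_json: str, data: bytes) -> bool:
    """Check that the asset on hand is the one the claim describes."""
    return json.loads(manifest_json)["asset_sha256"] == sha256_hex(data)


print(claim_matches(manifest, clip_bytes))                   # True
print(claim_matches(manifest, clip_bytes + b" re-encoded"))  # False: file changed
```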
Removing confusion for a safer internet
Ms. Schick and the companies Truepic and Revel.ai say their video shows that a digital signature can increase transparency around AI-generated content, and they hope it removes confusion about the source of a video, helping to make the Internet a safer place.
An ethical world with credibility and transparency
“When an AI tool is used well, it can be an amazing medium for storytelling and creative freedom in the entertainment industry,” said Bob de Jong, Creative Director at Revel.ai. “The power of AI and the speed at which it is evolving is something the world has never seen before.”
“It is up to everyone, including content creators, to design an ethical world with credibility and transparency for content creation so that AI can continue to be used and society can embrace it, enjoy it and not be harmed by it,” de Jong noted.