After hacking into social media accounts, scammers use artificial intelligence (AI) to clone the victim's face and voice and publish fake videos in order to carry out new scams.
Criminals bait victims with investments promising incredible returns or furniture sold at implausibly low prices, lending the offer the credibility of the person whose profile was stolen.
This technology, which has become known in recent years as deepfake, grows more accessible every day with the popularization of generative AIs such as ChatGPT. Today, five minutes of audio is enough to clone a voice with acceptable quality.
That is how some friends of Fábio José, 31, transferred money to a ghost account advertised on Instagram: they were tricked into sending money to the fake investment advisor @jaqueline_.investimentos.
The report spoke to the alleged advisor, who claimed that the scam victim had paid for services as described in the deepfake advertisement, a claim José disputes.
The case is not unique: on X (formerly Twitter), Folha found 12 complaints about similar situations in Brazil alone. In the United States, a scam gained prominence using the face of the world's most popular influencer, MrBeast, whose YouTube channel has 201 million subscribers.
The video, featuring a clone of James Stephen "Jimmy" Donaldson, owner of the MrBeast channel, offered iPhone 15s on TikTok in exchange for ridiculously small amounts. What would otherwise be an implausible pitch becomes more persuasive given that the celebrity distributes prizes to those who complete challenges on his channel.
Donaldson is known for lavish spending and philanthropic deeds. His videos are translated into several languages to increase their reach.
The criminals used the influencer's fame not only to run TikTok livestreams with deepfakes of MrBeast, but also to produce advertisements with his face promising US$500 (around R$2,500) in exchange for signing up.
Artificial intelligence that generates audio and video makes it possible to take the classic email scams known as phishing into video and audio in a believable way, Satnam Narang, a researcher at the cybersecurity company Tenable, tells Folha.
In these schemes, the main tactic is so-called social engineering: criminals use deception so that targets make the transfers themselves. This also reduces the chance of the bank blocking the transaction for anomalous behavior, as happens in other scams.
Cybersecurity researchers say criminals can simulate a deepfake without resorting to artificial intelligence at all, by using videos of people out of context. They can clip a passage such as "I invested everything in this" and publish it to run scams.
The only way to protect yourself from the risk of being cloned on the internet is to restrict the circulation of your images and audio, according to Lincoln Mattos, founder of the cybersecurity company Tempest. One option is to make your account private and limit posts to friends; the other is to avoid publishing at all.
Folha found that Nubank, Bradesco and Itaú, three of the six banks in Brazil with the most individual customers, have already received complaints related to the scams described in this report and follow their own protocols to decide in which cases to refund the amounts extorted from customers.
With another form of artificial intelligence, financial institutions can identify accounts with a high chance of involvement in scams and send risk warnings to customers. When making a Pix transfer to a stranger, such a warning is a relevant red flag.
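Banks do not disclose how these systems work, but the idea can be illustrated with a minimal, entirely hypothetical sketch: a rule-based risk score for an outgoing transfer, of the kind real institutions combine with machine-learning models before deciding to warn the customer. All thresholds and signals below are invented for illustration.

```python
# Illustrative only: not any bank's actual system. Scores a transfer
# on a few invented risk signals and decides whether to warn the user.

def transfer_risk_score(amount, recipient_is_new, recipient_reports, hour):
    """Return a 0-100 risk score for an outgoing transfer.
    Every threshold here is a hypothetical example."""
    score = 0
    if recipient_is_new:                       # first transfer to this account
        score += 30
    score += min(recipient_reports * 20, 40)   # prior fraud reports, capped
    if amount > 1000:                          # unusually large amount
        score += 20
    if hour < 6 or hour > 22:                  # transfer at odd hours
        score += 10
    return min(score, 100)

def should_warn(score, threshold=50):
    """Send a risk warning when the score crosses the threshold."""
    return score >= threshold

score = transfer_risk_score(amount=2500, recipient_is_new=True,
                            recipient_reports=1, hour=23)
print(score, should_warn(score))  # prints: 80 True
```

In practice the signal list is far longer and weighted statistically, but the flow is the same: score first, then warn the customer before the money leaves the account.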
These notices can arrive via WhatsApp, as with Bradesco's assistant Bia, or in the app itself, as Nubank does.
It is also possible to dispute the transaction, and for that it is important to document step by step how the scam unfolded. People familiar with the subject heard by Folha say that when the customer chooses to ignore a risky-transaction warning, it is harder to get the transfer refunded.
People with knowledge of the internal chargeback policies say financial institutions do not disclose the criteria used to decide on refunds, for security reasons.
[Chart omitted. Source: Central Bank (Jul. 2023)]
Banks also say they run fraud-awareness campaigns and recommend that customers enable two-factor authentication to make account theft more difficult.
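The second factor banks recommend is typically a time-based one-time password (TOTP) from an authenticator app. The mechanism is standardized in RFC 6238; as a sketch, the six-digit code is an HMAC of the current 30-second time window, truncated to a few digits:

```python
# Minimal TOTP derivation per RFC 6238 (the algorithm behind most
# authenticator apps). Educational sketch, not a production implementation.
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)      # time window index
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Test vector from RFC 6238 (SHA-1, secret "12345678901234567890"):
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, 59, digits=8))  # prints: 94287082
```

Because the code changes every 30 seconds and is derived from a secret stored only on the customer's device, a scammer who steals a password alone cannot log in.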
To avoid falling for these scams, besides being suspicious of offers that are too good to be true, people also need to pay attention to who they are talking to: check whether the interlocutor's behavior is coherent and, if the scammer claims to represent an institution, whether the channel is official.
Bradesco's technology director, Cíntia Scovine Barcelos, tells Folha that today the greater concern is identifying accounts involved in scams that use AI, rather than having the bank's systems directly circumvented by artificial intelligence.
"Today, facial recognition considers several factors, such as light, movement and biometric details, which are still difficult to defeat with computer-generated images, even ones capable of deceiving the human eye," she says.
Still, Barcelos says the advent of artificial intelligence poses a risk to institutions' security. "It's easier to write code that finds vulnerabilities in a system with the help of ChatGPT than it was before."
This not only brings new players into the cybercrime market, but also allows established criminals to increase the volume of cyberattacks.