Violence against children on the internet crosses borders and is on the rise; it must therefore be confronted as if it were a pandemic, says Iain Drennan, executive director of the global alliance WeProtect. The essential question, he says, is: "How can we prevent and stop this from happening?"
"A child in Brazil may become the target of a criminal from the United Kingdom," explains the activist. No government is capable of tackling this problem alone, he adds.
WeProtect brings together 102 countries, 67 companies, 93 civil society organizations and ten intergovernmental organizations to outline strategies against abuse directed at minors. Drennan, who is Irish, spoke to Folha via videoconference from London, where the alliance's office is based.
The group detected an 87% increase in reported cases of sexual and psychological violence against children and adolescents online between 2019 and 2022, in a review of data with sources such as Unicef and the Special Representation of the UN General Secretariat on Violence Against Children. Threats include harassment, creation of sexualized images using artificial intelligence and extortion — in 2022 alone, there were around 32 million episodes.
For Drennan, the response to this “global public health disaster” must involve internet platforms (social networks and games), local governments (with regulation and supervision), parents and children themselves, who must have the necessary tools to browse the networks freely and safely.
“There needs to be a culture of compliance so that companies, if they are operating and making money in Brazil, need to comply with local legislation if they want to continue operating”, says the executive director of WeProtect.
Among the emerging risks to children and adolescents on the internet, the entity highlights image-generating artificial intelligence platforms, which allow criminals to create erotic scenes involving minors.
Synthetic images complicate the response to abuse by adding an extra step to investigations: determining whether a photo is real. "Aggressors are using AI to bypass protective measures and facilitate abuse," says Drennan.
Furthermore, they can use this technology to extort young people, without having to convince them to send embarrassing images. According to the WeProtect report, many cybercriminals pretend to be girls in the online world and predominantly target boys aged between 15 and 17 through social media.
Research indicates that criminals take, on average, 45 minutes to groom minors.
The Chamber of Deputies approved last Thursday (7) a bill that criminalizes generating intimate images of women with artificial intelligence; when the practice targets minors, the penalty is increased. The text, however, does not address men victimized by this abuse. The bill is now under consideration in the Senate.
The WeProtect report shows that LGBTQIA+, ethnic minority or disabled children and young people are at greater risk. “These groups end up especially exposed to child sexual abuse, since inequality also extends to care and access to information.”
According to the activist, there is a gap between how children experience the internet and how parents and educators evaluate that experience, one that must be bridged with education on both sides.
As a good example of a protective measure against abuses committed with artificial intelligence against children, Drennan cites Australian legislation, implemented in September. The law requires big tech companies to take steps to ensure that AI products cannot be used to generate false images and videos about child sexual abuse.
“These risks reinforce the urgency for platform developers to think about security from the beginning of the project”, states WeProtect, in its report.
The proposal being processed in Congress does not hold technology developers responsible.
On the other hand, Drennan states that risks should not inhibit children’s freedom on the internet and the benefits of technology. “We saw during the Covid pandemic how important this was. We know that, online, young people are socializing, exploring sexuality, but we need to minimize the risks. This requires governments, companies and communities to come together to develop solutions.”
As good practice, children themselves should have a say in the privacy options available in a game or social network, according to the expert. The level of oversight can be adjusted as the young person gets older.
For WeProtect, TikTok’s child protection policy has set a good precedent. Users aged 13 to 17 have different access and default settings.
Young people aged 13 to 15 have accounts with stricter privacy guidelines, starting from registration. Certain features can only be accessed from the age of 16, such as private chat. The measure reduces the risk of random encounters with strangers on the internet.