The United Kingdom Parliament approved on Tuesday (19) the Online Safety Act, which establishes more rigorous standards and requirements for social media platforms such as Facebook, YouTube and TikTok. The bill has drawn criticism for being allegedly too broad and damaging to freedom of expression and users’ privacy — the text now requires royal assent to become law.
The new legislation focuses on regulating pornography websites and on rules to curb hate speech, harassment and other illicit digital content. It also covers terrorist propaganda, online fraud and the safety of children online.
UK Science, Innovation and Technology Secretary Michelle Donelan called the bill “game-changing” legislation. “Today, the government is taking a huge step forward in our mission to make the UK the safest place in the world to be online,” she said.
The bill took more than five years to develop, during which time it was repeatedly reworked amid debates over its scope and pressure from technology companies and social networks.
The law goes a step further than efforts in other countries to regulate online content. It obliges companies to actively screen potentially illicit material and judge whether it is illegal, rather than requiring them to act only after being alerted to the content, according to Graham Smith, a London-based lawyer specializing in internet law who spoke to The New York Times.
The text has also been criticized by free-speech activists and privacy groups, who argue that it threatens freedom of expression by encouraging companies to over-remove content. Doubts also remain about how the law will be enforced in practice.
At one point in the parliamentary debates, messaging services such as WhatsApp and Signal threatened to leave the British market unless provisions in the bill were changed — provisions the companies said would weaken the encryption standards their applications use to guarantee privacy between users.
Once the bill becomes law, platforms will be expected to remove illegal content or block it from appearing. They will also be expected to prevent children from accessing age-inappropriate content, such as pornography, through age-verification measures and age limits.
If companies fail to comply, UK communications regulator Ofcom can impose fines of up to £18 million or 10% of their annual global revenue, whichever is greater.
British lawmakers had been under pressure to approve the bill amid growing concern about the effects of internet and social media use on young people’s mental health.
Families who blamed their children’s suicides on social media were among the most vocal supporters of the law, which also calls for restrictions on content aimed at children that promotes self-harm and eating disorders.
With Reuters and The New York Times