
The UK’s controversial online safety bill is now law

by Celia

Jeremy Wright was the first of five UK ministers tasked with pushing through the UK government’s landmark internet regulation legislation, the Online Safety Bill. The current UK government likes to brand its initiatives as ‘world-leading’, and for a brief period in 2019, that may have been true. Back then, three prime ministers ago, the bill – or at least the white paper that would underpin it – outlined an approach that recognised that social media platforms were already de facto arbiters of what was acceptable speech on large swathes of the internet, but that it was a responsibility they didn’t necessarily want and weren’t always able to discharge. Tech companies were pilloried for the things they missed, but also, by free speech advocates, for the things they took down. “There was a kind of emerging realisation that self-regulation wasn’t going to be viable for much longer,” says Wright. “And so governments had to get involved.”

The bill set out to define a way of dealing with “legal but harmful” content – material that wasn’t explicitly against the law, but which individually or collectively posed a risk, such as health disinformation, posts encouraging suicide or eating disorders, or political disinformation with the potential to undermine democracy or cause panic. The bill had its critics – particularly those who feared it would give too much power to Big Tech. But it was widely praised as a thoughtful attempt to deal with a problem that was growing and evolving faster than politics and society could adapt. Of his 17 years in Parliament, Wright says: “I’m not sure I’ve seen anything in the way of potential legislation that had such a broad political consensus behind it.”

Having finally passed through both Houses of the UK Parliament, the Bill received Royal Assent today. It is no longer world-beating – the European Union’s competing Digital Services Act came into force in August. And the Online Safety Act comes into force as a broader, more controversial piece of legislation than the one Wright championed. The Act’s more than 200 clauses cover a wide range of illegal content that platforms will be required to deal with, and give platforms a ‘duty of care’ over what their users – especially children – see online. Some of the more nuanced principles about the harm caused by legal but harmful content have been watered down, and a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, has been added, which tech companies and privacy campaigners say is an unwarranted attack on encryption.

Companies from Big Tech to smaller platforms and messaging apps will have to comply with a long list of new requirements, starting with age verification of their users. (Wikipedia, the UK’s eighth most visited website, has said it won’t be able to comply, because verifying users’ ages would violate the Wikimedia Foundation’s principles on collecting data about its users.) Platforms will have to prevent younger users from seeing age-inappropriate content such as pornography, cyberbullying and harassment; publish risk assessments of potential dangers to children on their services; and provide easy ways for parents to report concerns. Sending online threats of violence, including threats of rape, will now be illegal, as will assisting or encouraging self-harm online or transmitting deepfake pornography, and companies will have to act quickly to remove such material from their platforms, along with scam ads.

In a statement, UK technology minister Michelle Donelan said: “The bill protects free speech, empowers adults and will ensure platforms remove illegal content. But at the heart of this Bill is the protection of children. I would like to thank the campaigners, parliamentarians, abuse survivors and charities who have worked tirelessly to not only get this Bill over the finishing line, but to ensure that it will make the UK the safest place in the world to be online”.

Enforcement of the Act will be left to the UK’s telecoms regulator, Ofcom, which said in June that it would begin consultations with industry after Royal Assent. It’s unlikely that enforcement will begin immediately, but the law will apply to any platform with a significant number of users in the UK. Companies that fail to comply with the new rules could face fines of up to £18 million ($21.9 million) or 10 per cent of their annual turnover, whichever is greater.
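As a back-of-the-envelope illustration of the “whichever is greater” penalty cap (the turnover figures below are invented for the example and are not from the Act):

```python
# Illustrative only: the Act caps fines at the greater of £18m or 10% of
# annual turnover. The turnover figures used below are invented.
FIXED_CAP_GBP = 18_000_000
TURNOVER_SHARE = 0.10

def max_fine_gbp(annual_turnover_gbp: float) -> float:
    """Return the maximum possible fine under the 'whichever is greater' rule."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * annual_turnover_gbp)

print(f"£{max_fine_gbp(50_000_000):,.0f}")     # £18,000,000 (fixed cap applies)
print(f"£{max_fine_gbp(5_000_000_000):,.0f}")  # £500,000,000 (10% of turnover)
```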

Some of the controversy surrounding the Act is less about what’s in it than what’s not. The long passage of the legislation means that its development spanned the Covid-19 pandemic, giving legislators a live view of the social impact of misinformation and disinformation. The spread of anti-vaccination and anti-lockdown messages became an obstacle to public health initiatives. Once the worst of the pandemic was over, the same falsehoods fed into other conspiracy theories that continue to disrupt society. The original White Paper, which formed the basis of the Bill, included proposals requiring platforms to tackle this kind of content – material that may not be illegal individually, but which is dangerous in the aggregate. That’s not in the final legislation, although the Act does create a new offence of “false communication”, making it a criminal offence to intentionally cause harm by communicating something the sender knows to be untrue.

“One of the most important things was to tackle harm that happens on a large scale. And because it’s so focused on individual pieces of content, it missed that,” says Ellen Judson, head of the digital research hub at think tank Demos. The law includes strict rules forcing platforms to act quickly to remove illegal posts – such as terrorist content or child sexual abuse material – but not disinformation campaigns, which are built from a drip-drip of misleading content. The law fails to grasp, Judson argues, that “when that turns into things going viral and spreading, then the damage can be cumulative”.

Wright says that the exclusion of disinformation and misinformation from the Bill was partly due to confusion between the remits of different departments. The Department for Culture, Media and Sport “was told that the Cabinet Office was going to take care of all this. ‘Don’t worry your pretty little heads about it, it’s being done elsewhere in something called the Defending Democracy agenda’,” he says. “And then I think, subsequently, it wasn’t really. So I think … there is still a gap there.”

Under the law, larger platforms will be expected to police potentially harmful but not illegal content by applying their own standards more consistently than they currently do. Free speech campaigners have decried this as giving private companies control over what counts as acceptable discourse online, while some experts on dis- and misinformation call it a cop-out that leaves Big Tech less accountable for spreading falsehoods. But legal experts say compliance with the law will require platforms to be more transparent and proactive. “They’re going to have to put all these processes in place about how their decisions are made, otherwise they risk being seen as a platform that actually controls all sorts of free speech,” says Emma Wright, head of technology at law firm Harbottle & Lewis. That’s likely to become quite a significant burden. “It’s the new GDPR,” she says.

By far the most controversial clause in the 300-plus pages of the Online Safety Act is Section 122, which has been widely interpreted as forcing companies to scan users’ messages to make sure they’re not transmitting illegal material. This would be incredibly difficult – perhaps impossible – without breaking the end-to-end encryption on platforms such as WhatsApp and Signal. End-to-end encryption means that the sender and recipient of a message can see its content, but the owner of the platform it’s sent on can’t. The only way to comply with the law, experts say, would be to install so-called client-side scanning software on users’ devices to examine messages before they’re sent, rendering the encryption largely useless. The government said during the drafting of the bill that companies could find a technical solution to scan messages without undermining encryption; companies and experts countered that the technology doesn’t exist and may never do.
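To make the objection concrete, here is a minimal, hypothetical sketch in Python (using the third-party `cryptography` package). The shared Fernet key stands in for a session key that a real end-to-end protocol, such as the Signal protocol, would derive on the two devices; the names and the “watchlist” are invented for illustration, not any platform’s actual design. The point it shows: a client-side scanner runs on the plaintext before encryption, so the message is inspected even though the ciphertext itself stays unreadable to the platform.

```python
# A hedged sketch, not any platform's real implementation. The shared key
# stands in for an E2E session key negotiated between the two devices
# (e.g. via a Diffie-Hellman handshake); the platform's server never sees it.
from cryptography.fernet import Fernet

session_key = Fernet.generate_key()    # known only to sender and recipient
sender = Fernet(session_key)
recipient = Fernet(session_key)

WATCHLIST = {b"example-banned-bytes"}  # invented stand-in for scan targets

def client_side_scan(plaintext: bytes) -> bool:
    """Runs on the sender's device BEFORE encryption, which is the crux of
    the objection: it inspects exactly what the user wrote."""
    return plaintext in WATCHLIST

def send(plaintext: bytes) -> bytes:
    if client_side_scan(plaintext):
        raise ValueError("message flagged before it was ever encrypted")
    return sender.encrypt(plaintext)   # only ciphertext leaves the device

ciphertext = send(b"hello")
# The platform relays `ciphertext` but, lacking session_key, cannot read it.
assert recipient.decrypt(ciphertext) == b"hello"
```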

“This gives Ofcom, as a regulator, the ability to force people like us to put third-party content monitoring [on our products] that unilaterally scans everything that goes through the apps,” Matthew Hodgson, CEO of encrypted messaging company Element, told WIRED before the bill was passed. “That undermines encryption and provides a mechanism for bad actors of all kinds to compromise the scanning system to steal the data that’s flying around.”

Companies whose products rely on end-to-end encryption, including Signal, have threatened to leave the country, and Meta said it might pull WhatsApp from the UK if the bill was passed. That cliff edge has come and gone, and both services are still available – albeit after an 11th-hour reassurance from the government that it wouldn’t force platforms to adopt non-existent technology to scan users’ messages, which was seen by some as a climbdown.

But the clause remains in the bill, worrying privacy and free speech campaigners who see it as part of a spectrum of threats to encryption. If the Online Safety Act means that companies have to remove encryption or use client-side scanning to bypass it, “it potentially opens [data] up to being swept up in the wider surveillance apparatus,” said Nik Williams, policy and campaigns officer at campaign group Index on Censorship.

The Online Safety Act has worrying overlaps with another piece of legislation, the Investigatory Powers Act, which allows the government to force platforms to remove encryption. Williams says that the overlap between the two pieces of legislation “creates a surveillance gateway between the OSB and the IPA in that it can give the security services, such as MI5, MI6 and GCHQ, access to data that they didn’t have before… I would say it’s probably an unprecedented extension of surveillance powers”.

The morning after the Online Safety Bill passed the House of Lords, the UK Home Office launched a new campaign against encrypted messaging, specifically targeting Facebook Messenger.

Former minister Jeremy Wright says the issue of encryption is “frankly not resolved. I think the government has sort of dodged around giving a definitive view on what it means for encryption”. But, he says, the answer is unlikely to be as absolute as opponents of the bill make it out to be. Encryption won’t be banned, he says, but platforms will have to explain how their policies around it balance security with their users’ right to privacy. “If you can meet those [security] obligations with encryption or with encryption as part of the service, you’re fine,” he says. If not, “you’ve got a problem… it can’t be true that a platform is entitled to say, ‘Well, I’m using encryption, so that’s a get-out-of-jail-free card for me in terms of security obligations.'”
