The Online Safety Bill returns to Parliament next Monday. The government has announced several significant changes to it, which will please some but have led others to accuse it of being watered down. In this article we take a look at what is changing.
‘Legal but harmful’ provisions removed
The Bill included requirements for platforms to deal with content that was legal but harmful. These provisions were unpopular, prompting significant concern that the Bill would censor speech online that is legal to say in person, and they will now be removed.
The government says that the amended Bill will introduce new overarching transparency, accountability and free speech duties for Category 1 services, resulting in stronger action to protect children as well as greater choice for adult consumers over the types of content they see.
Platforms will not be able to remove or restrict legal content, or suspend or ban a user, unless the content is against the law or the circumstances for doing so are clearly set out in their terms of service. If they do remove content or ban a user, they will be required to offer an effective right of appeal. This aims to protect against companies arbitrarily removing content or banning users, and to provide due process when they do.
Tech platforms may set whatever terms of service they wish. According to the government, they already tend to have robust and, in many cases, extremely detailed and comprehensive policies prohibiting abuse and other harmful content. However, the government believes these policies are not always properly enforced, and so it is making changes to require platforms to enforce their user safety policies consistently.
The Bill will still aim to prevent a wide range of criminal activity and illegal content online. This includes the most prevalent and serious online hate crime as well as fraud, assisting suicide, threats to kill, harassment and stalking, the sale of illegal drugs or weapons and revenge pornography.
The government has also confirmed it will use the Online Safety Bill to create a new criminal offence of assisting or encouraging self-harm online. The Law Commission’s recommended offence would target intentional encouragement or assistance of self-harm at a high threshold, equivalent to grievous bodily harm. This is designed to combat malicious behaviour deliberately encouraging vulnerable people to harm themselves.
The Bill will also ensure that adults have tools to control what sort of content appears in their feeds. They will also be able to block anonymous trolls, via tools to control whether they can be contacted by unverified social media users. The government is calling this a “triple shield”: social media firms will be legally required to remove illegal content, take down material in breach of their own terms of service, and provide adults with greater choice over the content they see and engage with.
There will also be better reporting mechanisms for when adults see content that is illegal, harmful to children or should have been removed in line with terms of service, with platforms expected to process and resolve complaints more quickly. Another amendment will add the criminal offence of controlling or coercive behaviour to the list of priority offences in the Bill. This aims to help protect girls and women in particular.
Child safety duties
The government says that the strongest protections in the Bill are for children. Companies in scope will need to protect young people from content and activity posing a material risk of significant harm. This includes taking preventative measures to tackle offences such as child sexual exploitation and abuse.
An amendment will require social media platforms to publish their risk assessments on the dangers their sites pose to children. Previously, the Bill required platforms to carry out these assessments but not to proactively publish them.
In addition, Ofcom will be given the power to make companies publish details of any enforcement notices they receive from the regulator for breaching their safety duties under the Bill.
A further amendment will make platforms’ responsibilities to provide age-appropriate protections for children clearer. Where platforms specify a minimum age for users, they will now have to clearly set out and explain in their terms of service the measures they use to enforce this, such as age verification technology. Such technology often relies on AI, which may be subject to bias, so it may create problems of its own.
Harmful communications offence
Some were troubled by the introduction of a harmful communications offence. The government has decided to remove it, with the aim of ensuring that the Bill’s measures are proportionate and do not unduly criminalise content that some may find offensive. However, to retain protections for victims of abuse, the government will no longer repeal the offences in the Malicious Communications Act and Section 127 of the Communications Act. To avoid duplication in legislation, it will instead remove the elements of those offences which criminalise false and threatening communications, since these are covered by the Bill’s new offences. This aims to maintain protection from harmful communications, including racist, sexist and misogynistic abuse.
The other proposed new offences covering false and threatening communications will remain in the Bill. The false communications offence captures communications where the sender intended to cause harm by sending something they knew to be false, while the threatening communications offence captures communications which convey a threat of serious harm, such as grievous bodily harm or rape.
Other criminal measures
The Law Commission was asked by the Ministry of Justice to undertake a review of the laws around intimate image abuse in 2019, following calls to make it easier to prosecute those who take or share sexual, nude or other intimate images of people without their consent. In July 2022, the Commission published its final recommendations for reforming the law. This included a new framework of offences designed to address all forms of intimate image abuse, including criminalising “downblousing” and sharing pornographic “deepfakes” and “nudified” images without consent.
Last week the MOJ confirmed its intention to implement the Law Commission’s recommendations. In the immediate term, it will put forward an amendment to the Online Safety Bill, which would criminalise the sharing of a person’s intimate images without their consent. Separate to the Online Safety Bill, the government will also bring forward a package of additional laws to tackle a range of abusive behaviour including the installation of equipment, such as hidden cameras, to take or record images of someone without their consent.
The government is also tabling an amendment to create a flashing images offence, aimed at preventing so-called epilepsy trolling.
In addition, the Victims’ Commissioner, Domestic Abuse Commissioner and Children’s Commissioner will be added as statutory consultees in the Bill, meaning Ofcom must consult each of them when drafting the codes of conduct that tech firms will be required to follow.
To ensure the proposed changes receive proper scrutiny, the government says that it will return several clauses to a Public Bill Committee for consideration. Some amendments, such as those affecting children, will be tabled in the Commons, and others in the Lords.