In late November, the UK Parliament announced a significant departure from the initial draft of the Online Safety Bill (‘the Bill’) by dropping the requirement for tech giants to remove content that is ‘legal but harmful’. The decision has raised concerns amongst critics that it ‘waters down’ the previous draft of the Bill, under which providers of regulated user-to-user services would have been required to actively address legal but harmful content on their platforms. The revised Bill removes those duties and instead favours public risk assessments, with users given access to tools to control the content that they consume, in a bid to protect freedom of speech.
Whilst on one hand this seems like a positive step for freedom of speech, on the other, placing the onus on Internet users to act as autonomous content moderators and reporters is a burdensome requirement; in most instances, users are likely simply to scroll past or disregard harmful content. Beyond the shift from a proactive to a reactive approach, it also raises concerns as to whether providers will take the necessary steps to remove harmful content once it is flagged by a user. Whether shifting the duty onto users is practical and will protect consumers, or merely gives providers cover to host content without comprehensively moderating their platforms, remains to be seen.
Latest update: November 2022
Predicted timeline: The Bill is due to be passed by April 2023.
Stobbs (IP) Limited, trading as Stobbs, registered in England and Wales, Company number 08369121. Registered Office: Building 1000, Cambridge Research Park, Cambridge, CB25 9PD. VAT Number 155 4670 01. Stobbs (IP) Limited and its directors and employees who are registered UK trade mark attorneys are regulated by IPReg www.ipreg.org.uk