Political pressure increasing on social media companies to police their platforms
Over the course of a week in August, the UK experienced widespread civil unrest, with crimes such as rioting, vandalism, looting and assault occurring in various places across the country. Amid the arguments over the causes of the disturbances, questions were soon raised about the role of social media in propagating and inciting the violence.
Political pressure has been mounting over the summer, with the UK government stating that tech companies must deal with content and misinformation on their platforms that could contribute to such unrest. Public opinion also appears to be leaning in that direction: according to a YouGov poll, 66% of Britons say that social media companies should be held responsible for posts inciting criminal behaviour during the recent unrest.
The debate adds further pressure on tech companies – and keeps the door open to tighter regulation or even criminal liability – in light of the UK’s Online Safety Act 2023, a new and only partially implemented set of laws aimed at protecting people online.
Online Safety Act 2023
The Act was passed into law on 26 October 2023 and places a slew of new responsibilities on tech and social media companies and search services, making them more responsible for their users’ safety on their platforms. It also creates new criminal offences, which came into force on 31 January 2024, including:
- encouraging or assisting serious self-harm
- cyberflashing
- sending false information intended to cause non-trivial harm
- threatening communications
- intimate image abuse
- epilepsy trolling
Of these, the two most important in this context are the false information and threatening communications offences. In both cases the offence requires an intention to cause serious harm, which includes grievous bodily harm and serious financial loss. However, both offences are aimed at the individuals posting rather than the platforms hosting the content. For tech companies, the Act instead imposes duties of care in relation to illegal content, including content inciting violence, and requires them to put in place measures to reduce such content. Platforms must also remove any other illegal content where there is an individual victim (actual or intended), whether it is flagged to them by users or they become aware of it through any other means.
The task of regulating online safety falls to the communications watchdog, Ofcom. Companies found not to have complied with these duties can be fined up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater. In the case of tech giants, the latter amount could potentially run into billions of dollars.
Other sanctions can include criminal action against senior managers who fail to ensure that their companies comply with information requests from Ofcom, although at this point criminal liability for companies themselves extends only to child safety duties.
Self-regulation?
With calls to extend the Online Safety Act, tech companies will need to keep a close eye on whether the rules and regulations are broadened. In the meantime, companies would be wise to check their compliance with the existing Online Safety Act regime and to consider carefully whether now is the time to strengthen their content moderation. Serious crimes have already been committed online by those who incited the riots, and repeat offences are possible at any time.