Image caption: Meta's founder and chief executive Mark Zuckerberg has put most of his attention on the company's AI innovations (AFP News).

Meta, the social media giant behind Facebook and Instagram, is facing significant criticism following recent changes to its content moderation policies. According to Meta's own Q1 2025 Integrity Report, online harassment and hate speech have risen since the company implemented a major moderation shake-up earlier this year.

This increase has sparked concern among regulators, users, and advocacy groups, raising questions about the balance between free expression and online safety.

Meta's Content Moderation Overhaul: What Changed?

In early 2025, Meta announced a sweeping revision of its content moderation policies aimed at prioritising free speech. The company reduced its reliance on automated systems to remove content and narrowed its definition of hate speech to focus only on direct and dehumanising attacks.

Additionally, Meta replaced third-party fact-checkers with community-sourced notes—a method similar to that used by rival platform X, though with stricter guidelines and oversight mechanisms in place.

Meta claims these changes have led to a 50% decrease in moderation errors, meaning fewer posts were wrongly taken down. However, the cost of this shift appears to be a rise in harmful content slipping through the cracks.

Surge in Online Harassment and Hate Speech

According to Meta's Q1 2025 Integrity Report, the prevalence of bullying and harassment content increased slightly but noticeably following the moderation update, rising from 0.06–0.07% of content views to 0.07–0.08%, while the prevalence of violent and graphic content also ticked up, to roughly 0.09%. These figures may seem small, but with billions of pieces of content viewed across Meta's platforms every day, even a shift of a hundredth of a percentage point translates into millions of additional harmful posts seen by users.

The increase is largely attributed to Meta's deliberate effort to reduce over-enforcement and wrongful takedowns, yet critics argue the company has sacrificed user safety in the process. Many users have voiced frustration over the perceived lack of protection from abusive content, with some calling for more robust moderation measures.

Regulatory Backlash and Public Opinion in the UK

The moderation shake-up comes at a time when governments, especially in the UK, are tightening regulations on online platforms. The UK's Online Safety Act, whose duties are being phased in during 2025 with child safety requirements taking effect in July, requires social media companies to swiftly remove illegal content such as terrorist material, child sexual abuse material, and fraud. Failure to comply can result in fines of up to £18 million or 10% of global annual revenue, whichever is greater, and, in extreme cases, court-ordered restrictions on a service's availability in the UK.

Public opinion supports tougher moderation measures. A global survey found that 79% of respondents agree that incitements to violence should be removed from social media platforms. In the UK, the debate continues on finding the right balance between protecting free speech and ensuring users are safe from digital abuse and misinformation.

What Lies Ahead for Meta?

Meta's moderation shake-up has ignited a crucial debate on how social media platforms should manage harmful content without overstepping into censorship. With regulatory bodies like Ofcom increasing scrutiny under the UK Online Safety Act, Meta faces growing pressure to refine its approach.

The company is likely to continue evolving its policies to better address the concerns of both users and regulators. Ultimately, the challenge lies in creating an online environment that fosters free expression while safeguarding against harassment and abuse—a balance that is far from easy to achieve.