Facebook, Twitter, YouTube and Microsoft have agreed to a new EU code of conduct that requires them to review and remove hate speech on their platforms within 24 hours of being notified. The code, drafted in partnership with the European Commission, is an effort to stop social media from being used as a platform for spreading xenophobia and extremist propaganda.
European internet companies have struggled to quell the increase in harmful content online, particularly in the wake of the recent terror attacks in Paris and Brussels. As well as having to combat the racist backlash that invariably follows such events, companies such as Facebook and Twitter have unintentionally provided terrorist groups with platforms to spread their extremist views.
Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said: "The recent terror attacks have reminded us of the urgent need to address illegal online hate speech.
"Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred.
"This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."
Under the new code of conduct, Facebook, Twitter, YouTube and Microsoft are required to review "the majority" of hate speech removal requests within 24 hours and remove or disable access to content where necessary. They must also put in place "clear and effective" processes for reviewing potentially illegal content on their platforms.
Additional training will be provided at the companies so that staff can deal with incidents quickly and efficiently, and Facebook, Twitter, YouTube and Microsoft will share best practices with each other.
The companies must also raise awareness amongst users about the types of content not permitted on their sites, based on a clear set of community guidelines. In addition to swiftly removing harmful content, Facebook, Twitter, YouTube and Microsoft will attempt to counter hateful rhetoric by promoting "independent counter-narratives, new ideas and initiatives and supporting educational programmes that encourage critical thinking."
While social media companies claim to have stepped up their efforts to combat harmful content, critics argue that their responses have been inadequate.
In France, Facebook, Twitter and YouTube have been taken to court by civil rights organisations for failing to remove the bulk of some 586 racist, homophobic and xenophobic posts reported to the companies between late March and 10 May 2016.
Twitter has found itself in a particularly troublesome spot, with its own executives deriding the company's ability to tackle abuse. The company has since established a safety council tasked with monitoring the website for dangerous content and has also performed a major crackdown on accounts linked to terrorist organisations.
Karen White, Twitter's head of public policy for Europe, said: "Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. In tandem with actioning hateful conduct that breaches Twitter's Rules, we also leverage the platform's capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance.
"We look forward to further constructive dialogue between the European Commission, member states, our partners in civil society and our peers in the technology sector on this issue."
The new code of conduct has already drawn criticism from some civil rights groups, however, who claim that the new rules allow private companies to police the internet according to their own guidelines.
Internet advocacy organisation European Digital Rights (EDRi) and international rights group Access Now called the code of conduct "ill-conceived" and announced they would be withdrawing from future discussions with the European Commission regarding internet policy.
They said in a joint statement: "In short, the 'code of conduct' downgrades the law to a second-class status, behind the 'leading role' of private companies that are being asked to arbitrarily implement their terms of service. This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism.
"This means that this 'agreement' between only a handful of companies and the European Commission is likely in breach of the EU Charter of Fundamental Rights, under which restrictions on fundamental rights should be provided for by law. It will, in practical terms, overturn case law of the European Court of Human Rights on the defence of legal speech.
"Countering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human-rights obligations. This issue remains a priority for our organisations and we will continue working for the development of transparent, democratic frameworks."