European Union ministers approved plans on Tuesday (23 May) to force social media companies such as Facebook, Twitter and Google to deal with videos containing hate speech posted on their platforms.

The proposals would be the first EU-level legislation targeting the online world. They will, however, have to be agreed with the European Parliament before becoming law.

The approval comes just a day after a suicide bomber killed at least 22 people and injured dozens of others at an Ariana Grande concert in Manchester.

Under the proposal, social media platforms that offer videos as an "essential part" of their services will have to take measures to block videos containing hate speech, incitement to hatred and content justifying terrorism from their sites.

This could include setting up mechanisms for users to flag such content, Reuters reported.

"We need to take into account new ways of watching videos, and find the right balance to encourage innovative services, promote European films, protect children and tackle hate speech in a better way," said Andrus Ansip, EU Commission vice president for the digital single market.

Mashable UK reported that although no timeline has been set for the parliamentary vote on the proposal, Ansip has said it would happen "in the coming weeks."

The proposals will not cover live streaming such as Facebook Live; they will be confined to videos stored on the social media platforms, an EU diplomat told Reuters.

The proposals also include the imposition of a 30% quota of European films and television shows on video streaming platforms such as Netflix and Amazon Prime Video. Previously, the European Commission had proposed a 20% quota, Reuters said.

Member states will also be able to ask video-sharing platforms to contribute financially to the production of European works, both in the country where they are established and in the countries whose audiences they target.

The proposals are part of an update to the EU's Audiovisual Media Services Directive, put forward a year ago, which deals with online hate speech as well as the protection of children from offensive online content. It also requires investment in European-made content distributed online.

Social media platforms struggling to police their sites

A recent leak detailing how Facebook polices postings on its platform has shown what a difficult task that is. The site has nearly two billion users, and its 4,500 moderators have just seconds to decide whether to allow a post.

A Guardian report highlighted how Facebook's internal guidelines show the platform's reluctance to delete videos of death and self-harm because it does not want to censor its users.

Google's YouTube faces a similar challenge. Local media recently reported that adverts from big companies were appearing next to extremist videos promoting hate, violence and terror.

Mashable UK noted that Germany recently approved a bill that would see social platforms slapped with fines of up to €50m if hate speech is not removed from the sites within 24 hours of being flagged.