YouTube, owned by Google, and Facebook are using sophisticated technology to remove extremist content from their sites, according to a new report.
Reuters reports that although the companies have not publicly acknowledged the policy, sources say the technology can take down videos that match a database of banned content and also identify new postings related to extremist acts, such as beheadings or speeches inciting violence.
The sources who revealed how the technology works did not discuss the technique in detail, nor how videos were initially identified as extremist. The system can catch attempts to re-post content already identified as unacceptable, but it cannot automatically block videos that have not yet been identified.
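The matching behaviour described above, where known content is caught on re-upload but novel content passes through, is characteristic of fingerprint-database systems. Below is a minimal illustrative sketch, not the companies' actual method: it assumes a simple exact-hash database, whereas production systems reportedly use perceptual fingerprints that survive re-encoding.

```python
import hashlib

# Hypothetical database of fingerprints of banned content.
# SHA-256 only catches byte-identical re-uploads; real systems use
# perceptual hashes that tolerate re-encoding and minor edits.
banned_hashes = set()

def fingerprint(data: bytes) -> str:
    """Compute an exact-match fingerprint of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def flag_as_banned(data: bytes) -> None:
    """Record a video's fingerprint once it has been identified as banned."""
    banned_hashes.add(fingerprint(data))

def is_reupload(data: bytes) -> bool:
    """Return True if an upload matches previously banned content."""
    return fingerprint(data) in banned_hashes

# Once content is flagged, later identical uploads are caught,
# but content not yet in the database passes through unchecked.
flag_as_banned(b"previously identified video bytes")
print(is_reupload(b"previously identified video bytes"))  # True
print(is_reupload(b"brand new video bytes"))              # False
```

This mirrors the limitation the sources describe: the database only grows as humans or other processes identify content, so genuinely new extremist material is not blocked automatically.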
According to the sources, the technology will be refined over time, and more companies are expected to adopt it. It was reportedly developed originally to identify and remove copyright-protected content on video sites.
The latest move comes as tech companies step up efforts to prevent their social media platforms from being used to spread extremism and plan terrorist attacks. Analysis of several attacks since last year, from Paris to Belgium, has indicated that platforms such as WhatsApp, Telegram and Twitter are increasingly being used to spread extremist propaganda and even plan further attacks. Many videos of beheadings and inciting speeches were also available in closed groups on YouTube and Facebook.
Companies like Twitter have admitted to blocking several pro-Isis accounts, but these groups reportedly continued to operate under pseudonyms. On YouTube, too, videos that had been taken down often reappeared through proxy accounts.
Sources now say that companies using these automated systems are declining to discuss the technique for fear that terrorists may learn how to manipulate it.