A computer scientist has reportedly developed a new technology that can be used by social media platforms to help combat the spread of terrorist content and propaganda online.
Dartmouth College professor Hany Farid, working alongside the non-profit Counter Extremism Project (CEP), claims to have built cutting-edge software capable of automatically flagging photos, videos and even audio files uploaded by groups like Al-Qaeda or the so-called Islamic State (Isis).
The project, part-funded by Microsoft, uses a technique called 'robust hashing' to identify jihadist content – even if it has been altered before being uploaded to the web. The CEP has built a massive database of terrorist-related content – imagery, videos and audio recordings – to help kick off the tracking process.
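The core idea behind robust hashing can be sketched in a few lines. This is a minimal illustration, not the CEP's actual algorithm: an image is reduced to a short signature of brightness comparisons, so that small alterations – such as the uniform brightening a re-encode might introduce – leave the signature unchanged, while genuinely different content produces a distant one.

```python
# A minimal sketch of the 'robust hashing' idea (illustrative only, not
# the CEP's algorithm): reduce an image to a short bit signature that
# survives small alterations, so near-duplicates can still be matched.
from statistics import median

def signature(pixels, grid=8):
    """pixels: 2D list of grayscale values. Returns a 64-bit string:
    one bit per cell, set when the cell is brighter than the median cell."""
    h, w = len(pixels), len(pixels[0])
    avgs = []
    for gy in range(grid):
        for gx in range(grid):
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            block = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            avgs.append(sum(block) / len(block))
    m = median(avgs)
    return "".join("1" if a > m else "0" for a in avgs)

def hamming(a, b):
    """Number of differing bits between two equal-length signatures."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 64x64 diagonal-gradient 'image', then the same image
# uniformly brightened, as if re-encoded before upload.
img = [[x + y for x in range(64)] for y in range(64)]
brighter = [[v + 30 for v in row] for row in img]
other = [[x * 3 for x in range(64)] for y in range(64)]  # unrelated pattern

print(hamming(signature(img), signature(brighter)))  # 0: the alteration is invisible to the hash
print(hamming(signature(img), signature(other)))     # large: different content
```

Because the signature is built from relative brightness comparisons rather than raw pixel values, a uniform shift in brightness does not move a single bit – which is what lets altered re-uploads be recognised.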
The non-profit is planning to create a new organisation – dubbed the National Office for Reporting Extremism – to oversee the database and, if all goes to plan, work with social media companies, many of which have long struggled to fend off use of their platforms by terrorist sympathisers.
"If we seize this opportunity and have partners across the social media spectrum willing to fight the extremist threat by deploying this technology, extremists will find Internet and social media platforms far less available for their recruiting, fundraising, propagandising, and calls to violence," Farid said.
"It is no longer a matter of not having the technological ability to fight online extremism, it is a matter of the industry and private sector partners having the will to take action."
The software and its algorithms have been tested and are now being prepared for a launch which, according to Farid, is still a number of months away. To date, no social media firm has signed up to use the software, although – according to Bloomberg – talks are underway with Facebook.
Software 'recognises' jihadi images
The software is an evolution of a previous project called 'PhotoDNA', which was developed by the Dartmouth professor to help combat the spread of child pornography online. By extracting a 'digital signature' from each image, Farid developed a method of running these unique signatures against a large pool of stored content so that matches can be flagged or reported as inappropriate.
He told DefenseOne: "Deep learning, and other related modern-day learning algorithms are not capable of quickly and reliably distinguishing between appropriate and inappropriate content. The speed and error rate for these approaches is simply prohibitive for dealing with Internet-scale image uploads." So, to circumvent this problem, Farid said the answer was to teach software to 'recognise' images after they had been flagged rather than to understand "the concept of abuse."
Now, focusing on the jihadi content, he said firms using the software will not only have access to the CEP database, but also be permitted to add their own signatures.
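The matching step described above can be sketched as follows. All names here (`SignatureDB`, `matches`) are hypothetical, invented for illustration rather than drawn from the CEP's actual system: a platform seeds a database with the shared signatures, optionally adds its own, and flags any upload whose signature falls within a small Hamming distance of a known one.

```python
# A hypothetical sketch of querying a shared signature database
# (class and method names are illustrative, not the CEP's real API):
# uploads are hashed, compared against known signatures, and flagged
# when within a small Hamming distance of any of them.

class SignatureDB:
    def __init__(self, shared_signatures):
        # Start from the shared database; platforms may add their own.
        self.signatures = set(shared_signatures)

    def add(self, sig):
        """Let a platform register its own flagged-content signature."""
        self.signatures.add(sig)

    def matches(self, sig, max_distance=1):
        """True if sig is within max_distance bits of any known signature."""
        return any(
            sum(a != b for a, b in zip(sig, known)) <= max_distance
            for known in self.signatures
        )

# Shared database seeded with two known signatures (toy 8-bit length).
db = SignatureDB({"10110010", "01100111"})
db.add("11110000")  # a platform-specific addition

print(db.matches("10110011"))  # True: one bit away from a known signature
print(db.matches("00001101"))  # False: far from everything in the database
```

A real deployment would index millions of signatures and need faster nearest-neighbour lookup than this linear scan, but the flag-or-pass decision works the same way.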
"We have known this for a long time, and despite the good intentions of social media companies, the problem [of extremist content] has only gotten worse," said Mark Wallace, chief executive of CEP, who has previously served in the US government including the Department of Homeland Security (DHS).
"We believe we now have the tool to reverse this trend, which will significantly impact efforts to prevent radicalisation and incitement to violence. The technology now exists to quickly and efficiently remove the most horrific examples of extremist content. We hope everyone will embrace its potential."
All social media platforms have rules against posting extremist content, but the nature of the websites – easy, free signups – means that ridding the networks of such material is easier said than done. As previously reported, Twitter recently removed over 125,000 accounts for posting Isis-related content.
In February, representatives from major technology firms Google, Facebook and Twitter were summoned by the UK parliament to outline steps they were taking to combat online crime and malicious content.
Simon Milner, policy director at social media giant Facebook, rejected assertions that his firm is partly to blame for spreading propaganda. "I reject that Daesh have been a success because of our platform," he said. "Absolutely, that's not the case. We have worked very hard to disrupt what they do. Keeping people safe is our number-one priority."
In June, Facebook, Twitter, YouTube and Microsoft all signed a new EU code-of-conduct that requires them to "review and remove" hate speech on their platforms within a day of being notified.