Representatives from major tech firms Google, Facebook and Twitter have denied that their digital platforms are "instrumental" in spreading terrorist ideology across the internet, and stressed their commitment to combating online crime.
The comments were made during a UK House of Commons select committee hearing on counterterrorism, at which each firm was called before MPs to explain how it is working to curb the spread of viral content uploaded by groups such as the Islamic State (Isis), also known as Daesh.
Simon Milner, policy director at social media giant Facebook, rejected the assertion of committee chairman Keith Vaz that his firm, and the internet in general, is partly to blame for spreading terrorist propaganda. "I reject that Daesh have been a success because of our platform," he said. "Absolutely, that's not the case. We have worked very hard to disrupt what they do. Keeping people safe is our number-one priority, [Daesh] is part of that but it's absolutely not the only extremist organisation that we care about.
"Facebook has become a hostile place for Daesh and that for us is good news. It doesn't mean they don't try [to upload content] so we want to share what we have learnt about how we have made Facebook a hostile place with other platforms."
Dr Anthony House, head of public policy and strategy at Google, said his firm has been working for years to combat terrorism and has become "extremely effective" at removing both accounts registered with Google and videos hosted on its popular video platform YouTube.
"We are extremely effective at removing both videos and accounts that are created by foreign terrorist groups and extremely effective at removing re-uploads of those videos. It's a constant cat-and-mouse game, as it is with any form of abuse of our services," he said. "We can't solve all of the world's problems, but if we can make our platforms a hostile place for extremism, that is of fundamental importance. We don't want our platform to be an unsafe place, and our ongoing success is critical."
Flagging the issue
House explained that since 2008 YouTube has offered a dedicated "flagging" system that allows users to report abusive or terrorist material uploaded to the site, noting that the platform receives roughly 100,000 flags a day. He also revealed that YouTube removed 14 million videos from the service in 2014 alone.
"One of the challenges we have is that despite having a billion users they might not understand how to flag [content] effectively. For example, our most flagged video ever is a Justin Bieber video simply because people dislike it," he said.
To increase the proportion of these so-called quality flags, he said that YouTube is now working with other organisations to help report "priority" content. "We work with government agencies and NGOs to help them better understand our community guidelines and give them operational tools for flagging.
"What we found is that while the general public has roughly a one-third accuracy rate for flagging, these 'trusted flaggers' tend to have around a 90% accuracy. That makes it much easier to prioritise these flags and act quickly," he explained.
Meanwhile, Nick Pickles, UK public policy director for Twitter, admitted to the committee that his platform does not currently offer a dedicated option for reporting terrorism-related content. "We don't proactively notify law enforcement of potential terrorist material. If we're taking down tens of thousands of accounts, we're not in a place to judge the credibility of each of those threats," he said.
Recently, Sheryl Sandberg, Facebook's chief operating officer, said that despite doing everything it could to prevent extremist content from being published, the social network still struggles in the battle against Daesh. To counter this, Sandberg suggested users "like" the group's posts in an effort to undermine its propaganda.