Google, Facebook and Twitter fail on extremism
Tech firms should employ more people to police extremist content, lawmakers advise

Tech majors like Google, Twitter and Facebook are "consciously failing" to deal with extremist content on their networks, a panel of UK lawmakers has said. Parliament's Home Affairs Committee, in a recent report, suggested that the companies increase the workforce they devote to policing content. The report also urged law enforcement authorities to detect and block extremist content as well as the users behind it.

Keith Vaz, Home Affairs Committee chairman and opposition Labour Party lawmaker, said in a statement: "Huge corporations like Google, Facebook and Twitter, with their billion-dollar incomes, are consciously failing to tackle this threat and passing the buck by hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror."

The cross-party panel said about 800 people with links to the UK have travelled to Syria and Iraq to join the Islamic State (Isis) terror group, and around half of them have since returned to the country.

In their defence, the tech firms have said they have suspended numerous accounts and removed extremist content from their sites.

"We are engaged in a war for hearts and minds in the fight against terrorism. The modern frontline is the internet. Its forums, message boards and social media platforms are the lifeblood of Daesh and other terrorist groups for their recruitment and financing and the spread of ideology," Vaz said.

Google-owned YouTube is working with the UK government and authorities to curb radicalisation through videos on its site. "We take our role in combating the spread of extremist material very seriously. We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law," YouTube told Bloomberg.

Twitter recently said it has suspended as many as 360,000 accounts since mid-2015, including 235,000 accounts suspended for violating its policies related to the promotion of terrorism. In addition to account suspensions, its global Public Policy Team has expanded its partnerships with organisations working to counter violent extremism (CVE) online.

"We work with respected organisations such as Parle-moi d'Islam (France), Imams Online (UK), Wahid Foundation (Indonesia), The Sawab Center (UAE), and True Islam (US) to empower credible non-governmental voices against violent extremism," the social network said in a blog post.

Facebook said it is working with experts on "counter speech initiatives," in which people are encouraged "to use Facebook and other online platforms to condemn terrorist activity and to offer moderate voices in response to extremist ones."

"Terrorists and the support of terrorist activity are not allowed on Facebook and we deal swiftly and robustly with reports of terrorism-related content," Simon Milner, director of policy for Facebook UK, said.

"The report paints an inaccurate picture of the commitment of tech companies to tackle online extremism. As a number of companies made clear in their evidence to the committee, responsibilities to tackle online extremism are a serious and ongoing priority, backed by significant resources, a zero-tolerance approach, and decisive and fast action when needed," Charlotte Holloway, policy director at TechUK, which is a trade body for UK's tech industry said in an emailed statement to IBTimes UK.

"Tech companies work proactively to deal with online extremism daily, in constructive and proven partnerships with a wide range of policy-makers, the police and security agencies, and wider civil society bodies. Indeed, the vast majority of counter-terrorist operations would not succeed without the assistance and support of tech companies," added Holloway.

However, the panel said steps already taken by the tech firms are not enough given the number of users they have and the profits they make. "They must accept that the hundreds of millions in revenues generated from billions of people using their products needs to be accompanied by a greater sense of responsibility and ownership for the impact that extremist material on their sites is having," added the committee.

"If they continue to fail to tackle this issue and allow their platforms to become the 'Wild West' of the internet, then it will erode their reputation as responsible operators," it said.