In the wake of a terrorist attack on the streets of London which resulted in seven fatalities, UK prime minister Theresa May said the threat of "Islamist extremism" could no longer be given the "safe space it needs to breed", appearing to single out tech firms for aiding terror groups.

"We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning," May said, adding: "We need to do everything we can at home to reduce the risks of extremism online."

As the dust settled on the weekend (3 June) incident, which left dozens of members of the public injured, some critically, the Telegraph newspaper reported police had uncovered a "YouTube link" while probing the attack.

One source said the attackers, three men, had been "radicalised" by videos on the Google-owned platform.

The publication also said the Islamic State-inspired cell used YouTube to plot the van and knife attack in London.

Is this one example of Theresa May's safe spaces? Or, as one cybersecurity expert noted on Twitter, is it simply an evolution of the "violence is caused by playing video games" excuse?

Technology giants like Facebook and Apple have long faced accusations of protecting terrorists' communications by offering end-to-end encrypted apps such as WhatsApp and iMessage. Of course, a balance is needed: without encryption, the security of the entire web would crumble.

So what exactly are these 'safe spaces', the dark corners of the internet, and can they be stopped?

Encrypted messaging apps

One of the UK government's biggest annoyances over the past few years has been the rise of smartphone and tablet applications promising strong user privacy and encryption.

Essentially, such technology encrypts the content of messages on the sender's device and only decrypts it on the recipient's, making sure it cannot be intercepted or read by anyone in between (sometimes including those conducting bulk communications collection, like British intelligence).
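
To make the idea concrete, the sketch below shows the basic principle using the open-source PyNaCl library (an assumed dependency, not anything these apps actually ship). Real services such as WhatsApp or Signal use the far more elaborate Signal protocol; this is only a minimal illustration of why a relay server, or anyone tapping the wire, sees nothing readable.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only: real messaging apps use the Signal protocol, with key
# exchange, forward secrecy and authentication this toy example omits.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at the usual place at 8pm")

# A relay server (or anyone intercepting traffic) only ever sees this.
print(ciphertext.hex())

# Only Bob, holding his private key, can recover the message.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b"Meet at the usual place at 8pm"
```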

The apps frequently singled out are WhatsApp, iMessage, Skype, Signal and Telegram. In some cases, terrorists have used these to communicate - but the issue is complex. As Ars Technica noted, in light of recent attacks it may be more effective to ban cars than technology.

Echoing May's latest comments, current UK home secretary Amber Rudd previously told Andrew Marr on the subject of encryption: "We need to make sure organisations like WhatsApp, and there are plenty of others, don't provide a secret place for terrorists to communicate with each other."

This was one of the controversial aspects of the UK's Investigatory Powers Bill. Cybersecurity experts – and tech firms – have long argued that breaking strong encryption would cause significant problems, leaving communications of all users at risk of snooping on a massive scale.

Pictured (L/R): WhatsApp, pro-Isis Twitter, Skype. Credit (L/R): iStock, Reuters (2)

Social media companies

Some of the biggest companies in the world – Google, Facebook and Twitter included – operate large social networks, some with encrypted chat services built in. As a result, high-level executives at such firms are routinely criticised for allegedly aiding the spread of terrorist propaganda.

Groups can use these websites to share plans of attack, spread misinformation or recruit members to their cause. Telegram is one of the main platforms used to spread "official" Isis news, while Twitter recently ramped up enforcement, suspending more than 360,000 pro-terror accounts.

After the London attacks, Facebook's director of policy Simon Milner responded: "We want Facebook to be a hostile environment for terrorists." Indeed, the firm enabled its "Safety Check" feature on the night of the attack, letting those in the area notify friends and family they were alive.

A Google spokesperson said: "We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges." The firm asserted it is already working with "industry colleagues" on the matter.

"Privacy and security are often pitted against one another in an eternal tug of war. This is perhaps one of the most complex and ethically delicate paradoxes facing our world today," said Andrew Clarke, a director at One Identity, an enterprise focused cybersecurity firm.

"There is no right answer and each government and its citizens must decide at each point in time what is right for its situation," he continued.

"What's most disappointing is that we live in a world where we have to make these decisions."

Pictured (L/R): Facebook, Twitter, Google. Credit (L/R): Reuters/Dado Ruvic/Illustration (2), Reuters

The Dark Web and Tor Network

Notoriously difficult to police, the dark web is most commonly reached via Tor ("The Onion Router"), which lets users browse the internet anonymously and stay relatively hidden from authorities by routing their traffic through multiple relays around the world, so that no single point can link a user's real IP address to what they are doing.
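
The layering that gives the "onion" router its name can be sketched in a few lines, again using PyNaCl as a stand-in. This is a toy illustration of the concept only: real Tor builds telescoping circuits with its own key exchange and cell format, none of which appears here.

```python
# Toy sketch of onion-style layered encryption using PyNaCl's SealedBox
# (pip install pynacl). Purely conceptual; this is not Tor's actual protocol.
from nacl.public import PrivateKey, SealedBox

# Three relays: guard -> middle -> exit, each with its own key pair.
relays = [PrivateKey.generate() for _ in range(3)]

# The client wraps the message in one layer per relay, innermost layer for
# the exit node, outermost for the guard, so each relay can only peel off
# its own layer rather than see the whole path or the final content.
onion = b"GET http://example.com"
for relay in reversed(relays):
    onion = SealedBox(relay.public_key).encrypt(onion)

# Each relay in turn removes exactly one layer of encryption.
for relay in relays:
    onion = SealedBox(relay).decrypt(onion)

print(onion)  # b"GET http://example.com"
```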

The dark web is used by dissidents, journalists and those living under repressive regimes to access the internet safely. Unfortunately, it also has a major crime problem and hosts marketplaces selling everything from drugs and firearms to hacked databases and computer viruses.

In 2015, it emerged the UK had set up a taskforce, run jointly by GCHQ and the National Crime Agency (NCA), to dismantle criminal operations on the network. Catching criminals here requires significant operational resources, something UK police reportedly lack.

"It was only a matter of time before politicians started spouting nonsense about the relationship between extremism and the internet," Lee Munson, researcher with Comparitech, told IBTimes UK.

He added: "The assertion that tech companies are in some way facilitating terrorism by not removing content in a timely manner overlooks the fact that automated methods of doing just that are not entirely accurate and human review, cost aside, is a time-consuming process.

"While online hate speech and the promotion of terrorism on the web are both in need of curtailment, knee-jerk reactions from politicians without technical experience are not the answer."

Ultimately, it remains unclear by what criteria May would define a "safe space".