If terrorists use Twitter to promote their cause, is Twitter then responsible for terrorist acts? That theory, in short, is what a new lawsuit is advancing.
Tamara Fields is an American woman whose husband Lloyd was killed in Jordan on 9 November, when a gunman entered an international police training centre in Amman and shot and killed five people. It was a "lone wolf" attack by a Jordanian police officer.
The lawsuit filed by Fields this week contends that "for years, Twitter has knowingly permitted the terrorist group Isis to use its social media network as a tool for spreading extremist propaganda, raising funds, and attracting new recruits. This material support has been instrumental to the rise of Isis and has enabled it to carry out numerous terrorist attacks". Fields wants Twitter to pay unspecified damages for violating the federal Anti-Terrorism Act by having provided material support to terrorists.
Following the Jordan attack, the lawsuit says Isis claimed responsibility and issued this statement: "Do not provoke the Muslims more than this, especially recruited and supporters of the Islamic State. The more your aggression against the Muslims, the more our determination and revenge…time will turn thousands of supporters of the caliphate on Twitter and others to wolves."
The suit claims that there were roughly 70,000 Isis-related Twitter accounts as of December 2014, of which at least 79 were "official", tweeting 90 times a minute. (The Brookings Institution think tank has estimated that Islamic State supporters operated at least 46,000 Twitter accounts between September and December 2014.) The lawsuit claims that Twitter has done little to stop Isis from using its service.
While this claim may be true, legally it rests on shaky ground. There is no doubt that Twitter is a useful recruitment and propaganda platform for Isis, but it is just that: a platform. It is very difficult, if not impossible, to prove a direct link between Isis tweets and acts of terror. What the lawsuit could do is focus attention once more on the debate about social media platforms' responsibility to police terrorism.
Isis' social media savvy is well-documented
Numerous studies have documented the disturbing combination, in Isis' slick propaganda videos, of shocking acts of violence and references to western popular culture, such as video games and Hollywood movies. These videos are often disseminated on YouTube.
Apart from Twitter, Facebook, Instagram, and Ask.fm are all used to boast about savage attacks, to post imagery glorifying life in the Islamic State, and to make contact with young people in the West to encourage them to make the journey. There is no doubt that this huge social media reach has played a key part in establishing the appeal of Isis to its sympathisers across the world.
Where does this leave social media companies such as Twitter and Facebook? They do not take a uniform approach. Some technology executives worry that deleting posts too quickly could invite frequent and unnecessary takedown demands from the authorities.
Twitter in particular has positioned itself as a defender of free speech. It doesn't actively police the site, except for images of child sexual exploitation, and it is reluctant to censor users. Twitter's own "transparency report" shows that it didn't honour any of the 25 requests made by the American government and law enforcement to remove posts in the first six months of 2015.
The company says it agreed to 42% of the 1,003 removal requests made by governments and courts worldwide during that period. (A third were from Turkey, not known for its support of free speech.) Twitter only takes steps such as suspending an offending account after a user reports a violation.
This laissez-faire attitude prompted a heated debate in 2014 after images showing the beheading of James Foley by Isis militants were widely disseminated online; Twitter eventually took action to rid the site of these images and videos. By contrast, Facebook aggressively works to shut down any profile, page, or group related to a terrorist organisation, to the extent that Isis-related videos don't appear as frequently as they do on other social media networks.
The limits of free speech
The problem with policing these spaces on the internet goes beyond the logistics of doing so (companies are heavily reliant on users reporting objectionable content, and there is nothing to stop a banned user from starting up a new account), towards deeper questions over freedom of speech. A blanket policy of banning anything that incites violence could be seen as censorship. The case of Isis propaganda may seem clear-cut, but if the precedent is set, could that ultimately lead to other kinds of legitimate political discussions being banned?
These are the same arguments that have been rehearsed for decades around the world about the limits of free speech: the internet is now a forum where much of our day-to-day speech happens, so these debates must be transferred to the digital space. Internet companies, faced with the dissemination of terrorist propaganda on their networks, are struggling to establish where the line is. It isn't a new problem; YouTube has struggled with the issue since the earliest Al-Qaeda videos began to be uploaded well over a decade ago.
Amid this discussion, it is important to remember that the internet is not the be-all and end-all of Isis. Yes, YouTube and Twitter are useful tools for terrorist groups, used to hideous effect by Isis, but they are just tools. They do not create the ideology or propaganda disseminated on their platforms. What Fields' lawsuit might do is focus attention on the vital debate about what responsibility does lie with social media companies, and what they could reasonably do to mitigate the spread of hate speech and terrorist messaging on their platforms.