November 2015 Paris attack
Tech firms sued for providing extremists with material support
Reuters

Tech majors Google, Facebook and Twitter are facing a lawsuit filed by Reynaldo Gonzalez, whose daughter Nohemi was among the 130 people killed in the November 2015 Paris attacks. Gonzalez has accused the companies of providing "material support" to extremists.

"For years, [the companies] have knowingly permitted the terrorist group ISIS to use their social networks as a tool for spreading extremist propaganda, raising funds and attracting new recruits," states the lawsuit, filed in the US District Court for the Northern District of California.

"This material support has been instrumental to the rise of ISIS, and has enabled it to carry out numerous terrorist attacks, including the 13 November 2015 attacks in Paris, where more than 125 were killed, including Nohemi Gonzalez," the court papers read.

Gonzalez contends that without Twitter, Facebook, and YouTube, the "explosive growth of ISIS over the last few years into the most-feared terrorist group in the world would not have been possible".

The tech firms, however, say they are working to combat extremist content shared online. Since mid-2015, Twitter has suspended more than 125,000 accounts linked to the Islamic State (Isis) for "threatening or promoting terrorist acts".

US law exempts internet companies from liability for content posted on their sites, according to a BBC report. Under Section 230 of the 1996 Communications Decency Act, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".

In statements issued to the Associated Press, the companies said the case was without merit, citing their policies against extremist material. Twitter said it had "teams around the world actively investigating reports of rule violations, identifying violating conduct, and working with law-enforcement entities when appropriate".

Facebook said that if it saw "evidence of a threat of imminent harm or a terror attack", it would contact the authorities. Google noted it had "clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly removed videos violating these policies when flagged by our users".