TikTok slapped with £12.7 million fine

Illustration shows TikTok app logo (Reuters)

With its uncannily astute algorithm and short-form videos designed to keep users scrolling, it's not surprising that TikTok is one of the world's most used - and loved - social media platforms. And, ostensibly, the video-sharing site is fairly innocuous.

However, on April 4, the Information Commissioner's Office (ICO) fined TikTok £12.7 million for processing the data of up to 1.4 million children under 13 who were using the platform without parental consent between May 2018 and July 2020. This breached UK data protection laws, which stipulate that organisations using children's personal data must first obtain the consent of their parents or guardians.

The ICO accused TikTok of failing to check who was using the platform and of failing to remove underage users - even though its own terms of service bar those under 13 from holding an account, and senior staff at TikTok had raised internal warnings. The ICO stated that "TikTok did not respond adequately" to these concerns.

John Edwards, the UK Information Commissioner, said: "TikTok should have known better. TikTok should have done better. Our £12.7m fine reflects the serious impact their failures may have had. They did not do enough to check who was using their platform or take sufficient action to remove the underage children that were using their platform."

In September 2022, the ICO issued TikTok with a notice of intent, threatening the Chinese-owned video-sharing app with a £27 million fine. However, the ICO chose not to pursue allegations concerning the illegal use of special category data - sensitive personal data, such as information about ethnicity, health or religious beliefs - and so reduced the final fine.

In 2019, TikTok received a $5.7 million fine from the US Federal Trade Commission - also for illegally using the data of children under 13.

Kingsley Hayes, the Head of Data and Privacy Litigation at Keller Postman UK, commented on the fine. He stated: "TikTok abjectly failed to protect British children and their data. TikTok knew that kids aged under 13 were accessing its app, but it simply didn't take adequate steps to prevent this. This meant young kids could access content which may not have been appropriate for them. The ICO is right to have fined the company for failing to protect young children."

Kingsley further added: "Over a million young British kids were failed by TikTok in two ways. Firstly, the data collected may have been used to show them harmful, age-inappropriate content. Secondly, data about their preferences, browsing habits and personal profiles were collected and processed without parental consent."

Children recommended dangerous content within minutes

The fine follows a December 2022 report from the Center for Countering Digital Hate (CCDH), in which accounts presenting as 13-year-olds were recommended content involving suicide within 2.6 minutes and content involving eating disorders within 8 minutes. Certain accounts deliberately posed as individuals vulnerable to eating disorders - and these accounts were shown 12 times as much content relating to suicide and self-harm as the 'standard' accounts.

The study also found that content concerning eating disorders had garnered 13.2 billion views across 56 hashtags designed to evade moderation.

Further analysis by the CCDH found that between November 2022 and January 2023, only seven of the 56 hashtags flagged by their research had been removed, and the content had amassed another 1.6 billion views.

These new findings prompted various charities and advocacy groups, such as the NSPCC and The American Psychological Association, to write to TikTok's Head of Safety demanding action.

TikTok vs. the Government(s)

Accusations that TikTok allows harmful content to flow unimpeded into the 'For You' pages of vulnerable children come amid a maelstrom of both actual and potential bans. Deteriorating relations between China and the West have intensified suspicions about TikTok's ties to the Chinese government, and the app is increasingly perceived as a security threat.

On April 4, Australia became the latest country to ban the app from government devices, following similar moves by the US, UK, Canada, New Zealand, France, Denmark and others in recent weeks. The US is also considering a total ban.