TikTok has been found to direct children’s accounts to pornographic content after only a few clicks, according to a report by the campaign group Global Witness. The researchers created fake accounts using a 13-year-old’s birth date and turned on TikTok’s “restricted mode,” which is meant to limit exposure to sexually suggestive content.
Despite these safeguards, TikTok suggested sexualised and explicit search terms through its “you may like” feature. The recommended searches included phrases like “very very rude skimpy outfits” and “very rude babes,” escalating quickly to more explicit terms such as “hardcore pawn [sic] clips.” For some accounts, these suggestions appeared immediately after logging in.
Within a small number of clicks, researchers encountered pornographic content, ranging from partial nudity to explicit sex. Global Witness noted that much of this content evaded moderation by being embedded within seemingly innocent videos or images. In one account, researchers accessed pornography with just two clicks: one on the search bar and one on a suggested search.
Global Witness conducted tests both before and after the UK’s Online Safety Act (OSA) came into effect on 25 July; the act requires tech companies to protect children from harmful content, including pornography. The report also found that two videos appeared to feature someone under 16; these were reported to the Internet Watch Foundation.
The campaign group claimed TikTok may be in breach of the OSA, which obliges companies to prevent children from encountering harmful material. Ofcom, the UK regulator responsible for enforcing the act, said it would review the research findings. Under Ofcom’s rules, platforms that pose a medium or high risk of showing harmful content must configure algorithms to filter out such material from children’s feeds. TikTok’s guidelines explicitly ban pornography.
After being contacted by Global Witness, TikTok said it had removed the offending videos and improved its search recommendations. A spokesperson said the platform acted immediately to investigate the claims, remove violating content, and make updates to prevent further exposure.