Porn on social media disguised as educational content; parents’ role key in protecting kids: Experts
One 15-year-old boy said he stumbled upon X-rated content on Instagram while researching a school project. PHOTO: REUTERS
Vihanya Rakshika
UPDATED FEB 09, 2025, 08:22 PM
SINGAPORE - When one Singaporean mother noticed that her son was sneaking into the bathroom after midnight with his mobile phone for long periods, she confronted him – only to find that he was watching pornography on a social media platform.
The 52-year-old accountant, who asked to be identified only as Madam Ng due to privacy concerns, said her son’s behaviour had raised red flags for weeks before she caught him in November 2024.
It turned out that the teen and his friends were sharing links to explicit content on Instagram.
“He promised he would never again look for such obscene content online,” Madam Ng told The Straits Times.
The 15-year-old boy said he had stumbled upon the X-rated content on Instagram while researching a school project, as it was tagged under supposedly educational hashtags related to breastfeeding and health.
Checks by ST between Dec 15 and Dec 31, 2024, showed that content creators on social media platforms such as YouTube, Instagram and Facebook have found ways to bypass content moderation, which can involve a mix of artificial intelligence (AI) software and human reviewers.
Experts whom ST approached say there is a need for clearer content guidelines, stronger human oversight, and greater parental involvement to better protect children.
Education and instruction-related hashtags
During the two-week period in December 2024, ST found more than 20 explicit videos on YouTube and more than 30 such videos on Instagram and Facebook. These included clips of women pretending to breastfeed baby dolls and exposing their bodies. Such videos were accompanied by education- or instruction-related hashtags and captions.
Links in the social media profiles that posted the videos, as well as captions in the clips, often point people to OnlyFans accounts or websites with explicit content.
OnlyFans is a subscription-based platform where creators can share exclusive content, often adult-oriented, for paying subscribers.
In a media reply, the Infocomm Media Development Authority (IMDA) said it is actively trying to minimise Singapore users’ exposure to harmful content online.
In 2023, the authority issued a Code of Practice for Online Safety, which requires social media platforms to implement community guidelines, effective content moderation and reporting mechanisms to protect users, particularly children, it added.
IMDA said users can report harmful content to the platforms, which must act swiftly and remove offensive and banned content.
In a media reply, a YouTube spokesman said the platform “does not allow explicit content meant to be sexually gratifying”.
The spokesman said: “We’ve carefully reviewed the flagged content and taken appropriate action, including removing videos and playlists, and terminating accounts.”
YouTube uses AI tools, such as machine learning, to identify and flag harmful content, automatically removing videos similar to those that have previously been deleted. Content that is not automatically removed is flagged for human review.
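In broad terms, the process YouTube describes is a two-stage triage: an automated pass that compares new uploads against previously removed material and removes near-duplicates, with anything else that looks borderline routed to human reviewers. The sketch below is purely illustrative of that idea and is not YouTube’s actual system; the helper functions and thresholds are invented for the example.

```python
# Illustrative two-stage moderation triage. Not any platform's real implementation;
# similarity_to_removed() and classifier_score() are hypothetical placeholders.

from dataclasses import dataclass

AUTO_REMOVE_SIMILARITY = 0.95   # near-duplicate of content already taken down
HUMAN_REVIEW_THRESHOLD = 0.60   # likely violating, but not certain enough to auto-remove


@dataclass
class Video:
    video_id: str
    caption: str


def similarity_to_removed(video: Video) -> float:
    """Hypothetical: similarity (0-1) of this upload to previously removed videos."""
    raise NotImplementedError


def classifier_score(video: Video) -> float:
    """Hypothetical: model's estimated probability (0-1) that the video violates policy."""
    raise NotImplementedError


def triage(video: Video) -> str:
    # Stage 1: automatically remove near-duplicates of already-deleted content.
    if similarity_to_removed(video) >= AUTO_REMOVE_SIMILARITY:
        return "auto_remove"
    # Stage 2: send borderline cases to human reviewers instead of removing them outright.
    if classifier_score(video) >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "publish"
```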
According to YouTube’s Transparency Report, the popular platform removed over nine million videos globally from July to September 2024, with more than 480,000 videos flagged for violating its nudity and sexual content policy.
Meta said on its website that it uses AI tools to keep its platforms free of smut, detecting and removing harmful content before it is reported.
When needed, human reviewers assess the most harmful material. Teen accounts are also automatically set to the strictest content controls.
AI and human content moderation
Professor Bo An, an AI expert, said the broad categorisation of “educational” material is a key loophole.
“AI tools, while effective, often struggle to understand the nuanced context required to distinguish legitimate educational material from harmful content designed to bypass moderation,” said Prof An, the AI division head at NTU’s College of Computing and Data Science.
Clearer content moderation guidelines, hiring specialised human moderators and fine-tuning AI systems with human-labelled data could help improve detection accuracy, he added.
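As a rough illustration of what training on human-labelled data can mean in practice, the toy example below fits a simple text classifier on captions that human moderators have labelled. The captions and labels are invented, and real moderation systems rely on far richer signals such as video frames, audio and account history rather than caption text alone.

```python
# Toy example: training a caption classifier on human-labelled data.
# The data is invented; production systems use multimodal models, not TF-IDF on captions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Captions labelled by human moderators: 1 = evasive/explicit, 0 = genuinely educational.
captions = [
    "breastfeeding tutorial #education #health full video in bio",
    "latch techniques explained by a certified lactation consultant",
    "anatomy lesson ;) link in profile for the uncut version",
    "paediatric nurse answers common newborn feeding questions",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(captions, labels)

# Score a new caption; a high probability would route it to human review.
print(model.predict_proba(["#health #education see my page for more"])[0][1])
```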
Professor Simon Chesterman and Associate Professor Tamas Makany both agreed that AI content moderation tools should be accompanied by effective human oversight.
“AI doesn’t feel or understand the worries we have for our children – so we should not hope that social media companies will develop one for them,” said Prof Makany, who teaches communication management at Singapore Management University.
How parents can protect their children
Parents can also use AI-powered content filtering software such as Net Nanny, Bark and Qustodio to monitor children’s social media activity, block inappropriate content and provide alerts, said Prof An.
“However, these tools work best when combined with parental involvement and trust-based relationships where children feel comfortable discussing online encounters,” he added.
Echoing this sentiment, Calming Hearts Counselling clinical director Caroline Ho said: “Parents must approach this issue with empathy, fostering open communication about boundaries, relationships and responsible online behaviour.”
Exposure to pornography during adolescence could distort perceptions of intimacy and sexuality, said Ms Ho, noting that about 70 per cent of her patients with porn addiction were first introduced to it as teens.
Prof Chesterman, vice-provost for educational innovation at NUS, said the best way to protect children might be to limit their access to social media, particularly for younger users, a view supported by counsellor Lisa Oake.
“Would you let your kids play on an expressway with trucks speeding by? Allowing children unfettered online access can be just as destructive,” said Ms Oake, who is the founder of Executive Counselling.
“Set clear boundaries, monitor their activities, and don’t hesitate to implement tough rules like requiring access to passwords.”