Making the case for a child rights approach to AI
A report by LSE’s Digital Futures for Children centre argues for a child rights-based approach to AI regulation in the UK. It highlights how AI is intensifying risks such as grooming, sextortion, and AI-generated child sexual abuse material. The report also examines the risks of AI chatbots used in emotional support scenarios and of algorithmic biases that may reinforce harmful stereotypes. It calls for embedding children’s rights into the core design of AI systems to prevent exploitation and protect children in digital environments.
Related Incidents
Same harm domain; actors and location may differ
14-year-old girl groomed via social media by Sydney private school teacher leading to child abuse material charges
12-year-old girl sexually groomed via TikTok leading to out-of-state assault in Binghamton, New York
9-year-old girl dies after attempting blackout challenge on YouTube
13-year-old Louisiana girl exposed to AI-generated nude deepfake images leading to expulsion and federal lawsuit against school district
12-year-old girl groomed and coerced into self-harm and producing child sexual abuse materials via social media in New Jersey