Discord
Discord has been named in 5 documented digital harm incidents, including 2 fatalities and 3 involving minors. The most common harm domain is Child Safety, followed by Addiction & Mental Health.
Documented Incidents
Florida opens investigation into Discord over child safety failures and predator access
Florida is investigating the Discord app over child safety concerns, following reports of abductions and grooming. The investigation, led by Florida Attorney General James Uthmeier, claims the app puts children at risk by allowing predators to contact young users. Discord is marketed as a communication platform for young people, similar to Facebook or Instagram, and is used by millions, including Gen Z users, for gaming and social interaction. The state has issued subpoenas for marketing and promotional documents related to Discord, as well as to other platforms such as TikTok and Roblox. A 2022 safety message from Discord states that the app includes tools to help users avoid inappropriate content and unwanted contact. The investigation is part of a broader push by Florida to address online safety risks for children.
Concerns Over Sexual Grooming and Explicit Content on Roblox
Parents have raised concerns about children being exposed to sexually explicit material and groomed on Roblox. A father discovered his 14-year-old son was having explicit conversations with someone claiming to be a 16-year-old in Vietnam. Another teenager reported being targeted by predators on the platform.
Individuals Form Support Group After Emotional Dependence on AI Chatbots
Allan Brooks and James developed emotional attachments to AI chatbots, believing them to be sentient, which led to severe mental health issues including suicidal thoughts and hospitalization. They later joined a peer support group called the Human Line, which includes others who have experienced similar issues with AI interactions. The incident highlights the growing concern around the psychological impact of AI chatbots and the need for community-based support.
Lost in a toxic online world: depraved teen killed his mother with a hammer after chilling AI chat
Tristan Roberts, an 18-year-old with autism and ADHD, killed his mother, Angela Shellis, in Prestatyn, North Wales, on October 24. Roberts, who was deeply involved in violent online communities and had expressed misogynistic views on Discord, became fixated on blaming his mother for his personal struggles. He used an AI tool to seek advice on how to commit the murder, which he carried out using a hammer purchased online. The attack lasted over four hours and was recorded by Roberts. He was later arrested at his home and sentenced to life in prison. The case has raised concerns about the influence of online platforms and AI in facilitating violent acts.
Reddit bans AI-generated celebrity deepfake porn communities
In February 2018, Reddit banned two communities, r/deepfakes and r/deepfakeNSFW, which hosted AI-generated pornographic content featuring celebrities without their consent. The move was part of a broader trend, with platforms including Pornhub, Discord, and Twitter also taking action against involuntary pornography. Reddit updated its policies to prohibit the creation and sharing of involuntary pornography and the sexualization of minors.