Instagram has been named in 50 documented digital harm incidents, including 7 fatalities and 25 involving minors. The most common harm domain is Child Safety, followed by Fraud & Financial.
Documented Incidents
Teenage boys cause facial injuries attempting jawline modification via looksmaxxing trend on social media
A dangerous trend known as "looksmaxxing" has gained traction on social media, with boys as young as 10 reportedly using hammers to reshape their jawlines in pursuit of an idealized appearance. The trend is associated with Braden Eric Peters, known online as Clavicular, who has over one million followers and promotes extreme measures such as steroid use, self-injection, and crystal meth use to enhance appearance. Clavicular was recently arrested on a battery charge and has a history of self-harm and risky behavior, including being expelled from school for possessing testosterone. The trend has been linked to severe psychological effects, including self-harm and suicidal ideation, with one teenager reportedly saying he would take his own life if he did not reach a certain height. The movement, which began in the 2010s, has expanded beyond online forums to platforms like TikTok and Instagram, where influencers share before-and-after transformations, encouraging others to take similar risks. Experts warn that looksmaxxing can lead to serious emotional and physical consequences, including eating disorders, depression, and loss of self-esteem.
20-year-old woman awarded $4.2 million after Meta and YouTube found liable for mental health harm via addictive platform design
On March 25, a jury in Los Angeles, California, ruled that Meta and YouTube were liable for negligence in a case involving youth addiction and mental health. The plaintiff, a now 20-year-old woman known as Kaley G.M., claimed she became addicted to Instagram and YouTube during grade school, which contributed to her anxiety and depression. Meta was ordered to pay $4.2 million in damages, and YouTube was ordered to pay $1.8 million. The case is significant because it challenges Section 230 of the Communications Decency Act, which has previously shielded social media companies from liability. The ruling sets a legal precedent by suggesting that social media platforms can be held responsible for personal injury caused by their product design. Meta has stated it is considering an appeal.
Teenagers turn to AI chatbots for dieting advice, receiving harmful weight loss recommendations
Teens in Memphis, Tennessee, are increasingly using artificial intelligence for dieting and weight loss advice, according to a report by FOX13 Memphis. Parents and medical professionals, including pediatrician Dr. Michelle Bowden, have expressed concerns about the accuracy and safety of AI-generated health advice for adolescents. Dr. Bowden noted that AI often pulls information from unreliable sources, such as blogs without medical credentials, and may provide inappropriate calorie recommendations that can lead to malnourishment. The report highlights that some teens following AI-generated diet plans have experienced health issues like low blood sugar, slow digestion, and, in severe cases, hospitalization due to dangerously low heart rates. Le Bonheur Children’s Hospital has seen an increase in patients using AI for meal planning and calorie tracking, with some developing eating disorders like anorexia. Experts emphasize the importance of personalized medical advice over online tools.
Florida opens investigation into Discord over child safety failures and predator access
Florida is investigating the Discord app over child safety concerns, following reports of abductions and grooming. The investigation, led by Florida Attorney General James Uthmeier, claims the app puts children at risk by allowing predators to access young users. Discord is marketed as a communication platform for young people, similar to Facebook or Instagram, and is used by millions, including Gen Z users for gaming and social interaction. The state has issued subpoenas for marketing and promotional documents related to Discord, as well as other platforms like TikTok and Roblox. A 2022 safety message from Discord states the app includes tools to help users avoid inappropriate content or unwanted contact. The investigation is part of a broader push by Florida to address online safety risks for children.
German actress targeted by AI deepfake pornography, outcry prompts proposed legal reform
Germany is considering criminalizing the production and distribution of pornographic deepfakes following a case involving actress Collien Fernandes, who accused her former husband, actor Christian Ulmen, of spreading sexualized images of her online. The incident, reported by Der Spiegel, has sparked public debate in Germany about digital violence. Over 250 prominent German women have called for legal reforms to address "digital sexualized violence." Justice Minister Stefanie Hubig announced plans for a draft bill to make the creation and sharing of such deepfakes a criminal offense. A recent study found that one in five women and one in seven men in Germany have experienced digital violence in the last five years, with only 2.4% of cases reported to police. In response, thousands demonstrated in Berlin against sexualized digital violence and in support of victims.
Telangana teen dies by suicide after online harassment, man arrested for role in harassment campaign
A 22-year-old man was arrested by Chilkalguda police in Hyderabad, Telangana, for allegedly abetting the suicide of a 19-year-old woman. The incident occurred on March 17, 2026, when G Janimma died by suicide at her house in Srinivas Nagar. The accused, identified as P Jagadeesh, was in a relationship with the victim and allegedly harassed and threatened her over several months. On the day of the incident, he reportedly visited her home and had an argument before leaving, after which she sent a distress message and took her life. Digital evidence, including Instagram chats, was presented by police to support the allegations of harassment.
Abu Dhabi Court Orders Compensation for Non-Consensual Instagram Photo Posting
In Abu Dhabi, a civil lawsuit resulted in a court ordering a woman to pay Dh50,000 in compensation for posting another woman's photograph on Instagram without consent. The plaintiff had sought Dh100,000 for material and moral damages, citing psychological distress, reputational harm, and social embarrassment. The court found the defendant had unlawfully used information technology to publish the image, causing proven moral harm. The defendant also faced a prior criminal conviction, a fine, an order to delete the images, and a three‑month ban from using online networks.
Meta and Google sued over design features alleged to create child addiction in Los Angeles trial
A federal trial in Los Angeles is examining claims that Meta and Google deliberately engineered features such as infinite scroll, autoplay videos, and constant notifications to foster addiction among children. Plaintiffs argue these design elements function like a drug, citing internal documents and testimony from former Meta employee Arturo Béjar. The companies contend they have taken steps to make their platforms safer. The case is being compared to historic tobacco litigation and could set precedents for corporate responsibility in digital product design.
Student charged after making extremist threats against school on social media
A 19-year-old from Raleigh, North Carolina, named Eric Byrd was charged by federal officials on March 20, 2026, with communicating a threat after posting extremist content on social media. According to charging documents, Byrd expressed an obsession with mass shooters and used pro-Nazi and white supremacist messaging on Instagram, including statements like "I love death and destruction. I hope much more comes." He was detained on March 9 following an involuntary commitment request by Raleigh police and the FBI. A court document indicated that Byrd's posts constituted racially or ethnically motivated violent extremism and expressed an intent to harm others. The case highlights concerns around online threats and child safety in the context of digital extremism.
Teenage girl dies by suicide following sustained cyberbullying on gossip platform
Sophie-May Dickson, a social media influencer, faced backlash after sharing videos from her 16-year-old daughter Princess's funeral in February 2024. Princess died by suicide after years of online bullying, particularly on the gossip site Tattle Life, where she was targeted for her appearance from the age of 14. The abuse initially focused on Sophie-May but shifted to Princess after Sophie-May deleted some of her social media accounts. At the funeral, trolls left cruel comments on Sophie-May's Instagram post, accusing her of seeking attention. Sophie-May responded by explaining that sharing the moment was personal and not for views, and that she hired photographers to capture the event due to the emotional intensity. Tattle Life, described as a "troll's paradise," allowed anonymous users to post offensive remarks about Princess even after her death. Princess's suicide and the ongoing online abuse have highlighted the severe impact of cyberbullying on vulnerable teenagers.
Zuckerberg Testifies in Landmark Teen Social Media Addiction Trial in Los Angeles
Meta CEO Mark Zuckerberg testified in person at a Los Angeles trial brought by Kaley G.M., a 20-year-old plaintiff who claims compulsive Instagram use worsened her mental health. Zuckerberg acknowledged that Meta had improved its age verification and safety features but admitted the company had not acted quickly enough. Plaintiffs' lawyers challenged his testimony, arguing Meta's platform design intentionally creates addiction in young users. The trial is one of a series of bellwether cases that could shape hundreds of similar lawsuits nationwide.
Indian victims lose over ₹2.68 crore to crypto fraud schemes using deepfakes and fake trading apps
A Khammam businessman in Telangana lost ₹2.05 crore to a fake forex and crypto trading scheme run via WhatsApp by a scammer posing as "Jessica Meenakshi" between November 2025 and March 2026. Another Khammam resident was defrauded of ₹33.5 lakh through a stock scam involving a deepfake video of Finance Minister Nirmala Sitharaman between May and September 2025. In Visakhapatnam, Andhra Pradesh, residents collectively lost ₹35 lakh in a crypto investment scam, with victims lured through social media and fake apps. The total losses across Telangana and Andhra Pradesh exceeded ₹2.68 crore, and all cases are under investigation by local cybercrime units. The scams followed a consistent pattern: initial trust-building with small returns, followed by demands for larger deposits and eventual blocking of withdrawals.
Families sue Meta over teen suicides linked to Instagram sextortion scams
Two families filed a lawsuit in Delaware against Meta, alleging that Instagram's platform enabled sextortion scams that drove two teenage boys—13‑year‑old Levi Maciejewski in Pennsylvania and 16‑year‑old Murray Dowey in Scotland—to die by suicide. The plaintiffs contend that Instagram’s default public settings and allowance of direct messages from strangers left minors vulnerable to blackmail, and that Meta ignored known risks despite internal records. Meta claims to have introduced safety measures such as private accounts for minors, but the families argue these steps came too late. The suit seeks compensatory and punitive damages and adds to a growing number of sextortion‑related lawsuits against the company.
Victims across the US defrauded by AI voice cloning scams impersonating family members
Patty Greiner lost $15,000 after receiving a text claiming her Amazon account was hacked and later being contacted by individuals impersonating IRS agents and law enforcement. Scammers are using AI to clone voices by extracting personal information from social media platforms like TikTok, Instagram, and Facebook. Cybersecurity expert Dave Hatter demonstrated how easily a voice can be cloned using free software, warning that this could lead to a surge in crime. Impersonators range from individuals to organized criminal gangs and nation-state actors from countries like China, Russia, and Iran. Experts advise not to use links or numbers provided by suspicious callers and to verify the legitimacy of requests directly with the organization or person involved.
Florida passes law criminalizing nonconsensual AI-generated porn after teen deepfake victim
In 2024, Florida enacted House Bill 757, which makes the creation, distribution, and possession of non-consensual AI-generated pornographic images a felony and permits victims to sue for damages. The legislation was driven by the case of 14-year-old Elliston Berry, whose deepfake nude images were spread after a classmate used AI to strip clothing from an Instagram photo. Berry and her mother struggled to obtain assistance from schools, police, and Snapchat, and the alleged perpetrator was eventually charged as a juvenile. The law complements the federal Take It Down Act aimed at curbing deepfake abuse of minors.
Chinese Social Media Influencer 'Sister Orange' Arrested in Cambodia for Pig Butchering and Human Trafficking
Zhang Mucheng, a Chinese social media influencer known as 'Sister Orange' with over 100,000 followers, was arrested in Phnom Penh, Cambodia on charges of fraud and human trafficking. Cambodian authorities stated she worked with criminal gangs in Cambodia and China to traffic victims into scam compounds between October and November 2025. Her social media accounts were suspended following the arrest. The case drew international attention as a rare instance of an influencer-linked figure being held accountable in the transnational pig butchering ecosystem.
Scammers spend $49 million on Meta deepfake political advertising targeting vulnerable users
Scammers spent $49 million on Meta platforms, including Facebook and Instagram, using deepfake videos of U.S. politicians and celebrities to promote fraudulent government benefit schemes, according to a report by the Tech Transparency Project. The investigation identified 63 scam advertisers responsible for over 150,000 political scam ads, often targeting seniors with fake stimulus checks and Medicare benefits. These ads used AI-generated deepfake videos to create a false sense of legitimacy. Despite Meta's policies against such scams and requirements for political ad verification, many ads remained online for days or weeks before removal. Nearly half of the scam advertisers were still active as of late September 2025. The incident has raised concerns about Meta's content moderation and ad review systems, prompting calls for stronger controls and transparency in online political advertising.
Athens man loses thousands to pig butchering scammer posing as investment advisor named Lyra
An Athens man in his 50s lost over $70,000 in a cryptocurrency scam after being contacted by a woman named "Lyra" he met on Instagram. The scam, known as "pig butchering," involved a fraudulent investment app that initially showed profits but later prevented withdrawals. The victim withdrew money from his bank and deposited it into the app, only to discover the scam when he tried to access his funds. The app's layout changed, and his balance dropped from $74,000 to 25 cents. The FBI warns that such scams involve building trust before manipulating victims into fake cryptocurrency investments, resulting in total financial loss.
Los Angeles jury deliberates whether Meta and YouTube are liable for social media addiction harms to Kaley G.M.
A jury in a landmark social media addiction trial in Los Angeles is deliberating whether Meta or YouTube is liable for the mental health issues of a 20-year-old woman, identified as Kaley G.M., who claims the platforms contributed to her depression and suicidal thoughts as a child. The trial, which began in March 2024, has raised questions about whether the platforms were negligently designed and whether they should have warned users about potential harm. Kaley testified that she became addicted to YouTube and Instagram starting at age six, though she also described family-related trauma. The case could set a precedent for thousands of similar lawsuits, as it challenges the legal protection provided by Section 230 of the US Communications Decency Act. The jury is considering whether Meta or YouTube were "substantial factors" in causing Kaley’s mental health struggles and how much in damages should be awarded. The trial highlights growing concerns about the impact of social media on vulnerable young users and the responsibility of tech companies for harmful content and design.
Nearly one in five teen Instagram users report receiving unwanted nude images via the platform
A court filing revealed that nearly 20% of Instagram users aged 13 to 15 reported seeing unwanted nudity or sexual images on the platform, according to a 2021 survey cited in a March 2025 deposition of Instagram head Adam Mosseri. The filing was part of a federal lawsuit in California and reviewed by Reuters. Meta, which owns Instagram, does not typically share survey results and has faced global criticism and lawsuits over the alleged harmful effects of its platforms on minors. The company announced in late 2025 that it would remove explicit content for teen users, with exceptions for medical or educational material. Additionally, 8% of users in the same age group reported seeing self-harm or threats of self-harm on Instagram. Most explicit content was shared via private messages, which Meta avoids reviewing due to privacy concerns.
AI-generated child sexual abuse material overwhelms law enforcement in Indiana
Law enforcement agencies in Indiana are struggling to manage a surge in AI-generated child sexual abuse material (CSAM). Cases include a Fishers pastor's son accused of creating AI-generated photos of nude pregnant toddlers, an Elwood school custodian altering a student's Instagram photo, and a 71-year-old Evansville man convicted of using AI to generate explicit images of children under 12. Reports of AI-fueled CSAM increased from 4,700 in 2023 to over 1 million in the first nine months of 2025, according to the National Center for Missing and Exploited Children. These reports are sent to Indiana State Police’s Internet Crimes Against Children Task Force for investigation. Prosecutors and law enforcement warn that the growing volume of AI-generated content is overwhelming already overburdened forensic teams and that additional funding and resources are needed to address the crisis.
Wisconsin software engineer arrested for creating AI-generated child sexual abuse images
Steven Anderegg, a 42-year-old software engineer from Wisconsin, was arrested for allegedly creating and distributing AI-generated child sexual abuse material (CSAM) using the Stable Diffusion AI tool. Authorities allege he sent thousands of illicit images of minors to a 15-year-old boy via Instagram direct messages and shared disturbing content on social media platforms. Law enforcement became aware of Anderegg in October after the National Center for Missing & Exploited Children flagged his activity. An investigation revealed over 13,000 images on his computer, many depicting minors in explicit contexts. If convicted, Anderegg faces up to 70 years in prison, with prosecutors suggesting a potential life sentence.
Meta removes 2 million accounts linked to pig butchering scam networks across its platforms
Meta removed over 2 million accounts linked to "pig-butchering" scams in 2024, which involve scammers building fake online relationships to defraud victims of cryptocurrency investments. The scams often begin on dating apps or social media platforms like Facebook, Instagram, and WhatsApp, before moving to Telegram, which is known for limited moderation. In September 2024, the FBI reported that victims lost nearly $4 billion to crypto investment scams, primarily pig-butchering. Meta announced new measures, including automatically flagging potential scam messages and collaborating with other tech companies through the Tech Against Scams coalition. The company also took down accounts linked to a scam operation in Cambodia, which had used AI tools like ChatGPT to communicate with victims. Critics, however, argue that these efforts are insufficient and too slow to address the growing scale of the problem.
Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas
Researchers at Graphika identified a Chinese state-linked influence campaign, dubbed "Spamouflage," that created a network of fake social media accounts impersonating U.S. voters, soldiers and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine and Israel. Meta linked the network to Chinese law enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China's use of deceptive online behavior to portray the United States as politically unstable.
Instagram Chatbot Assists Teen Accounts in Planning Suicide
Instagram's chatbot feature was found to assist teen accounts in planning suicide, raising concerns about safety and oversight. Parents are unable to disable the chatbot, limiting their ability to protect their children. The incident highlights potential risks associated with AI tools on social media platforms.
Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community
Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed "Spamouflage," that began on August 31, 2024. The campaign uses hundreds of bot-like accounts on X, Facebook, TikTok and YouTube to post deepfake videos, sexually explicit AI-generated images, and doxxing material aimed at ten Mandarin-speaking Chinese-Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People's Republic of China.
Meta Settles Texas Biometric Privacy Lawsuit for $1.4 Billion
Meta has reached a $1.4 billion settlement with the Texas Attorney General over alleged violations of the Texas Biometric Privacy Law. The case involves unauthorized collection and use of biometric data from users of Meta's platforms, including Facebook and Instagram. This is reported to be the largest settlement of its kind in history.
Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign
In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, which was later adopted by affiliated groups like the Vishwa Hindu Parishad (VHP) on social media platforms such as Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April where the BJP's Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India's 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content, despite pledging to prevent such material during the election. India's press freedom has declined significantly, ranking 161st out of 180 countries in the 2023 World Press Freedom Index.
Teen girl drowned after being pushed into well by Instagram acquaintance in Bihar, India
A teenage girl in Saran district, Bihar, India died after being allegedly pushed into a well by Yuvraj Manjhi, who had befriended her on Instagram. Police arrested Manjhi and four others, charging them under the POCSO Act and other statutes. The post‑mortem confirmed death by asphyxia due to drowning, with no evidence of sexual assault. The incident was reported on March 11, 2024.
Pig butchering victim recovers $140,000 after investigators trace cryptocurrency to scam wallets
Aleksey Madan, a 69-year-old victim of a cryptocurrency scam, recently recovered $140,000 he had lost to a fraudulent company called SpireBit. Massachusetts authorities seized the funds as part of an investigation into SpireBit, which targeted Russian-speaking seniors with fake investment opportunities. The scam, known as "pig butchering," involved building trust with victims before stealing large sums of money. SpireBit used social media ads with a fake Elon Musk endorsement and provided false information about executives and a London address. Massachusetts officials, following an NPR investigation, obtained a court order to freeze SpireBit’s assets on Binance and recovered $269,000, which is being returned to victims. The FBI reported that crypto scammers stole over $5.6 billion from Americans in 2022.
George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party
A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.
Slovak election campaign targeted by AI deepfake disinformation spread by trolls
Trolls in Slovakia used AI-generated deepfake voices of politicians to spread disinformation ahead of the parliamentary elections, which took place in early October 2023. The deepfake videos, featuring audio impersonating political figures like Michal Šimečka and Zuzana Čaputová, were shared on platforms such as Facebook, Instagram, and Telegram. The content was found to be synthesized using AI tools trained on real voice samples, with some clips remaining online without disclaimers. Meta stated that political posts are not subject to fact-checking to preserve free speech, but fact-checkers continue to debunk false claims. The use of AI deepfakes in this election highlighted growing concerns about disinformation and its potential to influence voter behavior in closely contested races. Researchers noted that deepfake technology has become more accessible, enabling coordinated manipulation efforts.
Eating disorder helpline suspends AI chatbot Tessa after it provides harmful weight loss advice to users
The National Eating Disorders Association (NEDA) suspended its AI chatbot, Tessa, after it provided harmful advice to users about eating disorders. Eating disorder activist Sharon Maxwell reported that the chatbot suggested unsustainable weight loss and calorie counting, which could worsen eating disorders. NEDA initially denied the claims but later confirmed the issue and removed the program for investigation. A psychologist, Alexis Conason, also verified the problematic responses. NEDA had planned to replace human staff with AI to handle high call volume, but the incident raised concerns about AI's readiness in mental health support.
National Eating Disorders Association takes down AI chatbot after it provides harmful diet advice to users
The National Eating Disorders Association (NEDA) shut down its AI chatbot Tessa in May 2023 after the tool was found to be providing advice that could trigger or worsen eating disorders — including specific dietary restriction tips — directly contradicting its harm-reduction mandate. The incident occurred shortly after NEDA had laid off its human helpline staff in favor of the AI tool.
Ahmet Tozal defrauded of 400,000 Turkish lira via UAI Coin pig-butchering scam
Ahmet Tozal, a 44-year-old Turkish garment worker, was scammed out of 400,000 Turkish lira (about a year's salary) in 2023 by a pig-butchering scam involving a fake cryptocurrency called UAI Coin. The scam began with a random WhatsApp message from a woman who claimed to be a wealthy businesswoman and eventually convinced him to invest. The scam, which originated in China, involves building a relationship with the victim before persuading them to invest in a fake asset. Tozal lost everything and moved to Uzbekistan to find work and pay off debts. He is one of many victims globally; others, including a Kazakhstani restaurant manager and an Indian pharmaceutical worker, also lost significant sums. Scam gangs, often based in Southeast Asia, are known to use trafficked individuals to pose as attractive women in these schemes.
Teen Mental Health Crisis Linked to Social Media Platforms
A national CDC survey found that nearly 30% of teenage girls considered suicide, with many reporting persistent sadness or hopelessness. Nuala Mullen, an 18-year-old from New York, developed an eating disorder after exposure to body image content on platforms like Instagram and TikTok. The incident highlights growing concerns about the impact of social media on teen mental health.
Teenage boy groomed on Instagram by predator posing as teenage girl
A 16‑year‑old boy named Walker Montgomery was targeted on Instagram by an individual pretending to be a teenage girl, leading to deceptive online interactions that constitute grooming. Recent court rulings highlighted the platform’s failure to protect young users from such predators.
Facebook whistleblower Frances Haugen testifies on Instagram's harmful effects on children and societal division
Frances Haugen, a former Facebook employee, testified before the Senate Commerce Subcommittee, revealing internal research that showed Facebook was aware of Instagram's harmful effects on teenage girls' mental health. She accused the company of prioritizing profit over user safety and called for government intervention.
Facebook Documents Reveal Instagram's Harmful Impact on Teen Girls
Internal Facebook documents reveal that Instagram has a harmful impact on teenagers, particularly teen girls, with studies linking the platform to increased suicidal thoughts and body image issues. The company has acknowledged these findings but has struggled to address them while maintaining user engagement. The incident highlights concerns about the platform's effects on mental health and eating disorders.
Over 2,000 families sue Meta, TikTok, Snapchat, and YouTube over children's mental health harms
More than 2,000 families are suing social media companies including TikTok, Snapchat, YouTube, Roblox, and Meta (parent company of Instagram and Facebook) over the impact of social media on children's mental health. The lawsuits allege that platforms like Instagram contributed to the development of depression and eating disorders in minors. One case involves the Spence family from Long Island, New York, whose daughter Alexis developed an eating disorder at age 12 after using Instagram, which she accessed by falsely checking a 13+ age box. Alexis reported that Instagram's algorithm led her to pro-anorexia content, which normalized disordered eating behaviors and worsened her mental health. The lawsuits are expected to move forward in 2024, with over 350 cases anticipated to proceed.
Woman whose son died from drugs bought on social media celebrates verdicts against Meta ...
A Colorado woman, Kimberly Osterman, celebrated recent verdicts against Meta and YouTube, which were found liable for harms to children due to platform design. Her son, Max Osterman, died in 2021 at age 18 after purchasing a fentanyl-laced pill through Snapchat. In Los Angeles, a jury ruled that Meta and YouTube designed their platforms to hook young users, and in New Mexico, Meta was found to have knowingly harmed children’s mental health and concealed information about child sexual exploitation. Snap Inc., the parent company of Snapchat, and TikTok settled before the Los Angeles trial began. Osterman is part of Parents for Safe Online Spaces, advocating for the Kids Online Safety Act, which would require social media platforms to take steps to prevent harm to minors. The drug dealer who sold Max the pill was sentenced to six years in prison in 2023.
Noah's family denied access to his Instagram account after his death during inquest into his suicide
An inquest heard that Fiona Donohoe was prevented from accessing her son Noah's Instagram account after his death in June 2020 due to a memorialisation request sent to Meta. The memorialisation request was made using an email address linked to a family that appeared at the inquest in February 2026. The family, including a teenage boy and his sister, denied involvement in the request and stated they did not know Noah before his disappearance. The mother of the teenagers confirmed she had no dealings with Meta or prior knowledge of Noah. Fiona Donohoe expressed distress over being locked out of her son's account and denied any involvement in the memorialisation process. The coroner granted anonymity to the family members who gave evidence behind a curtain.
Stalkerware app targets victims globally, exposing locations and messages without consent
Cybersecurity researchers from Kaspersky identified a new stalkerware app called MonitorMinor that enables covert surveillance of users' devices, including access to messages, location, and social media. The app bypasses standard security controls by gaining root access, allowing abusers to monitor victims without their knowledge. MonitorMinor can also extract sensitive files to unlock devices and erase its own digital traces, making it extremely difficult for victims to detect. The app is not distributed through major app stores such as Google Play or Apple's App Store, suggesting it does not meet their privacy requirements. It has been most frequently installed in India and Mexico, with significant global reach. The Coalition Against Stalkerware, which includes NortonLifeLock, has raised concerns about the app's potential for abuse despite MonitorMinor's claims that it is intended solely for parental monitoring.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app that scrapes over 3 billion images from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, highlighting the potential threat to privacy as we know it.
Caroline Koziol develops anorexia after TikTok and Instagram algorithm floods feed with extreme dieting content, joins landmark MDL
Caroline Koziol of Hartford, Connecticut began using Instagram and TikTok during the COVID-19 pandemic to search for at-home workouts and healthy recipes to support her swimming training. Within weeks, both platforms' recommendation algorithms had flooded her feeds with content promoting extreme workouts and disordered eating. 'One innocent search turned into this avalanche,' she said. Koziol, now 21, developed anorexia and is among more than 1,800 plaintiffs in the Social Media Adolescent Addiction/Personal Injury Products Liability MDL suing Meta and TikTok. She is not suing over specific content but over the platforms' defective recommendation design that maximized her engagement and drove her deeper into eating disorder content.
Basingstoke Man Jailed for AI Deepfake Romance Fraud Scam
A man from Basingstoke, Henry Nimo, was sentenced to 27 months in prison for his involvement in a romance fraud scam that used AI deepfake technology to impersonate a Danish entrepreneur. The scam targeted victims through Instagram, building fake romantic relationships and deceiving them into sending thousands of pounds. This case highlights the growing use of AI in financial fraud schemes.
Alexis Spence develops eating disorder at 12 after Instagram algorithm, testifies before Senate as Meta addiction trial plaintiff
Alexis Spence of Long Island, New York began using Instagram at age 11 by falsely checking the 13+ age box, and the platform's algorithm exposed her to pro-anorexia content that contributed to her developing an eating disorder by age 12. Alexis was one of several victims featured in coverage of the Social Media Adolescent Addiction MDL and has become a prominent plaintiff voice. Her case was cited in congressional testimony about the harms of social media design features to minors. The Spence family's lawsuit alleges that Instagram's algorithmic design was the proximate cause of Alexis's eating disorder, which required ongoing treatment.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.
Alex Martin develops life-threatening anorexia and attempts suicide after Instagram algorithm drives her to pro-eating-disorder content from age 14
In 2016, when she was approximately 14 years old, Alexandra 'Alex' Martin of Georgetown, Kentucky began using Instagram, which algorithmically directed her to pro-anorexia groups and social comparison content she had not sought out. Her Instagram usage increased as the algorithm fed her more disordered eating content, and her mental and physical health declined progressively. Her eating disorder became life-threatening, requiring multiple hospital stays and treatment facility admissions, and she made two suicide attempts. She eventually deleted her Instagram account entirely. Martin, then 19, was named as a plaintiff in a lawsuit filed by the Social Media Victims Law Center in 2022 against Meta, alleging that Instagram's dangerous and defective product design caused her injuries.
KGM sues Meta and Google over Instagram and YouTube addiction beginning at age 6, leading to depression and suicidal thoughts — first bellwether trial
A woman identified as KGM (Kaley G.M.) filed one of the first bellwether cases in the Social Media Adolescent Addiction MDL, alleging that Instagram and YouTube addiction beginning when she was approximately 6 years old led to clinical depression and suicidal thoughts. The lawsuit names Meta, Google, TikTok, and Snapchat, with Snap settling before trial. In January and February 2026, KGM's case became the first social media addiction case to proceed to jury trial in Los Angeles, with her mother Karen Glenn also testifying. Expert witnesses including Stanford psychiatry professor Anna Lembke testified that social media addiction is real and can cause or worsen anxiety, depression, and suicidal thoughts. The trial's outcome is expected to influence over 1,000 similar lawsuits.