YouTube
YouTube has been named in 30 documented digital harm incidents, including 3 fatalities and 6 involving minors. The most common harm domain is Misinfo & Disinfo, followed by Privacy & Surveillance.
Documented Incidents
Actress subjected to AI deepfake video impersonating her likeness distributed via YouTube
Veteran actress Yeom Hye Ran had her likeness misused when an unauthorized AI-generated deepfake video was uploaded to YouTube on March 31. Her agency, Ace Factory, confirmed the video was produced without consent and was later removed. The incident followed a previous controversy involving the AI film 'The Inspector,' which used Yeom Hye Ran’s likeness without proper authorization. The misuse of AI in film production has raised concerns about portrait rights violations, a topic that gained global attention during the 2023 Hollywood strikes. The Hollywood strikes, which lasted 118 days, led to agreements on AI usage regulations, wage increases, and improved residuals, but similar issues are now emerging in the Korean film industry. The incident highlights the urgent need for proactive measures to prevent AI-related privacy and rights violations.
20-year-old woman awarded $4.2 million after Meta and YouTube found liable for mental health harm via addictive platform design
On March 25, a jury in Los Angeles, California, found Meta and YouTube liable for negligence in a case involving youth addiction and mental health. The plaintiff, a now-20-year-old woman known as Kaley G.M., claimed she became addicted to Instagram and YouTube during grade school, which contributed to her anxiety and depression. Meta was ordered to pay $4.2 million in damages, and YouTube was ordered to pay $1.8 million. The case is significant because it challenges Section 230 of the Communications Decency Act, which has previously shielded social media companies from liability. The ruling sets a legal precedent by suggesting that social media platforms can be held responsible for personal injury caused by their product design. Meta has stated it is considering an appeal.
Rise in Deepfake-Enabled Corporate Fraud Costs U.S. Companies $1.1 Billion in 2025
In 2025, U.S. corporations suffered an estimated $1.1 billion loss due to deepfake‑enabled fraud, a threefold increase from the previous year. The article cites a 2019 scam in which a British energy executive wired $243,000 after believing they were speaking with their CEO, and a recent Italian scheme that used a cloned voice of the defence minister to extract nearly €1 million. It explains how executives' public appearances provide training data for attackers, enabling synthetic videos and voice calls that authorize fraudulent transactions. The piece urges companies to develop crisis protocols, tabletop exercises, and coordinated response plans involving legal, cybersecurity, and communications teams.
Meta and Google sued over design features alleged to create child addiction in Los Angeles trial
A federal trial in Los Angeles is examining claims that Meta and Google deliberately engineered features such as infinite scroll, autoplay videos, and constant notifications to foster addiction among children. Plaintiffs argue these design elements function like a drug, citing internal documents and testimony from former Meta employee Arturo Béjar. The companies contend they have taken steps to make their platforms safer. The case is being compared to historic tobacco litigation and could set precedents for corporate responsibility in digital product design.
Deepfake video falsely depicts Indian army chief sharing Iranian ship coordinates with Israel
A deepfake video of General Upendra Dwivedi, Chief of the Indian Army Staff, falsely claimed he admitted to sharing coordinates of an Iranian naval ship with Israel. The video was posted on an X account named "PLA Military Updates" and gained 15,900 views. Thai PBS Verify confirmed the video was AI-generated using Hive Moderation and AI Video Detector tools. A reverse image search traced the visuals to a Getty Images photo from February 26, 2026, showing Indian and Israeli prime ministers shaking hands. The original footage, from a March 7, 2026, YouTube video, showed General Dwivedi discussing military strategy and modernization, with no mention of Iran or Israel. Thai PBS Verify concluded the claim was fake news.
Nine-year-old Texas child dies attempting social media challenge after algorithm repeatedly surfaced content
A 9-year-old girl named JackLynn Blackwell from Stephenville, Texas, died after participating in a dangerous social media challenge known as the "blackout challenge," in which individuals intentionally choke themselves for a brief euphoric high. The incident occurred in her family's backyard in April 2024. JackLynn was found unconscious with a cord wrapped around her neck and later died. Her parents believe she was imitating videos she had seen online; hers is one of 80 deaths from the challenge documented by the CDC. The Blackwell family is now advocating for greater accountability from social media companies and calling attention to the risks of unregulated content. Some social media platforms have implemented warnings or blocked searches for the challenge, but videos promoting the act remain accessible.
Zuckerberg Testifies in Landmark Teen Social Media Addiction Trial in Los Angeles
Meta CEO Mark Zuckerberg testified in person at a Los Angeles trial brought by KGM, a 20-year-old plaintiff who claims compulsive Instagram use worsened her mental health. Zuckerberg acknowledged that Meta had improved its age verification and safety features but admitted the company had not acted quickly enough. Plaintiffs' lawyers challenged his testimony, arguing Meta's platform design intentionally creates addiction in young users. The trial is one of a series of bellwether cases that could shape hundreds of similar lawsuits nationwide.
Texas family warns of blackout challenge after child dies attempting TikTok-spread stunt
A 9-year-old girl from Stephenville, Texas, JackLynn Blackwell, died on February 3, 2026, after apparently attempting the "blackout challenge," a dangerous social media dare involving self-choking. Her parents believe she saw a video of the challenge on YouTube and tried to replicate it. The Centers for Disease Control and Prevention (CDC) reports that 80 people have died from this challenge, also known as the "choking game" or "pass-out challenge." The Blackwell family is raising awareness about the risks of viral social media challenges and the role of algorithmic content recommendations in exposing children to harmful content. In Delaware, six families have sued TikTok over similar incidents, and the Blackwells hope their case will lead to increased platform accountability.
AI-generated deepfake videos spread political disinformation in Bangladesh without platform intervention
AI-generated videos are spreading disinformation online in Bangladesh ahead of the 13th national election. A video featuring a woman resembling Rikta, a garment worker who lost her arm in the 2013 Rana Plaza collapse, falsely accused a political party of fraud and was shared over 21,000 times on the Uttarbanga Television Facebook page. The video, uploaded on 10 January, was identified as AI-generated after fact-checking by Prothom Alo. The Representation of the People Order prohibits the use of AI to create misleading content during elections, but such content continues to circulate. The Bangladesh Army issued a warning on 14 January about AI-generated videos misrepresenting military personnel, but the videos remain online. Authorities have yet to take action, despite the potential for such content to incite violence or confusion among voters.
Founder of pcTattletale pleads guilty to hacking and selling stalkerware
Bryan Fleming, the founder of U.S. spyware firm pcTattletale, pleaded guilty in a San Diego federal court to charges of computer hacking, illegal sale and advertising of surveillance software, and conspiracy. The plea follows a Homeland Security Investigations probe that began in 2021 after uncovering more than 100 stalkerware websites linked to the company. pcTattletale shut down in 2024 after a data breach exposed information on over 138,000 customers, and federal agents later raided Fleming’s home in Michigan. This marks the first successful U.S. federal prosecution of a stalkerware operator in over a decade.
AI-generated covers of musician's songs uploaded to her Spotify profile using vocals cloned from YouTube recordings
Independent folk musician Murphy Campbell discovered that AI-generated covers of her songs had been uploaded to her Spotify profile without her permission. The AI voice clones were created using recordings scraped from her YouTube channel and uploaded under her name, constituting copyright fraud and unauthorized use of her likeness.
Speedway employees receive $12.1 million in BIPA settlement over unauthorised biometric data collection
A federal court in the Northern District of Illinois approved a $12.1 million settlement resolving a class‑action lawsuit against Speedway, a convenience‑store and gas‑station chain, for allegedly violating the Illinois Biometric Information Privacy Act (BIPA). The case stemmed from Speedway’s practice of requiring employees to scan their fingerprints for time‑keeping without obtaining the written informed consent required by BIPA. The settlement will distribute the funds equally among roughly 7,700 current and former employees after attorneys’ fees are deducted.
Chinese Social Media Influencer 'Sister Orange' Arrested in Cambodia for Pig Butchering and Human Trafficking
Zhang Mucheng, a Chinese social media influencer known as 'Sister Orange' with over 100,000 followers, was arrested in Phnom Penh, Cambodia on charges of fraud and human trafficking. Cambodian authorities stated she worked with criminal gangs in Cambodia and China to traffic victims into scam compounds between October and November 2025. Her social media accounts were suspended following the arrest. The case drew international attention as a rare instance of an influencer-linked figure being held accountable in the transnational pig butchering ecosystem.
South Korean Election Authorities File Complaints Against YouTubers for AI Deepfake Smears
South Korean election authorities filed complaints against YouTubers who spread AI-generated deepfake content smearing political candidates during an election. The deepfakes were designed to damage the reputations of individuals running for office, and the complaints form part of a broader enforcement effort amid growing concerns about deepfake technology being used for disinformation in political campaigns.
Los Angeles jury finds Meta and Google liable for social media addiction harming Kaley
A jury in a landmark social media addiction trial in Los Angeles deliberated whether Meta or YouTube was liable for the mental health issues of a 20-year-old woman, identified as Kaley G.M., who claims the platforms contributed to her depression and suicidal thoughts as a child. The trial raised questions about whether the platforms were negligently designed and whether they should have warned users about potential harm. Kaley testified that she became addicted to YouTube and Instagram starting at age six, though she also described family-related trauma. The case could set a precedent for thousands of similar lawsuits, as it challenges the legal protection provided by Section 230 of the US Communications Decency Act. The jury weighed whether Meta or YouTube was a "substantial factor" in causing Kaley’s mental health struggles and how much in damages should be awarded. The trial highlights growing concerns about the impact of social media on vulnerable young users and the responsibility of tech companies for harmful content and design.
Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas
Researchers at Graphika identified a Chinese state‑linked influence campaign, dubbed “Spamouflage,” that created a network of fake social‑media accounts impersonating U.S. voters, soldiers and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine and Israel. Meta linked the network to Chinese law‑enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China’s use of deceptive online behavior to portray the United States as politically unstable.
Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community
Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed “Spamouflage,” that began on August 31, 2024. The campaign uses hundreds of bot‑like accounts on X, Facebook, TikTok and YouTube to post deep‑fake videos, sexually explicit AI‑generated images, and doxxing material aimed at ten Mandarin‑speaking Chinese‑Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People’s Republic of China.
Canadian company enables white supremacists to fundraise through hateful livestreams on multiple platforms
A Canadian company called Entropy, launched in Calgary in 2019, has been found to facilitate fundraising for white supremacists and other extremists through livestreams containing racist and antisemitic content. The platform allows viewers to donate directly to creators, many of whom were banned from mainstream platforms like YouTube. By 2021, Entropy had processed over $3 million in transactions and has since become a key financial tool for hate groups such as the Goyim Defense League (GDL). In July 2024, GDL members livestreamed an antisemitic harassment campaign in Nashville, Tennessee, during which a Jewish man and a biracial man were assaulted. The Southern Poverty Law Center has filed a lawsuit against GDL members, citing their hate-for-profit model. The founders of Entropy, Emmanuel and Rachel Constantinidis and David Bell, moved to Tbilisi, Georgia in 2022 but continue to operate the company under a registered corporation in Alberta.
WPP CEO Mark Read targeted by deepfake video call scam impersonating his identity
The CEO of WPP, the world’s largest advertising company, Mark Read, was targeted by a deepfake scam involving an AI voice clone and a fake Microsoft Teams meeting. The fraudsters created a WhatsApp account using a publicly available image of Read and impersonated him during a virtual meeting with another senior executive. The scam aimed to solicit money and personal information from an agency leader but was unsuccessful. WPP confirmed the phishing attempt was prevented due to the vigilance of employees. The incident highlights the growing use of AI and deepfake technology in corporate fraud, with similar attacks targeting financial institutions and other organizations in recent years. WPP has also reported being targeted by fake websites using its brand name and is working with authorities to address the issue.
Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign
In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, which was later adopted by affiliated groups like the Vishwa Hindu Parishad (VHP) on social media platforms such as Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April where the BJP’s Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India’s 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content, despite pledging to prevent such material during the election. India’s press freedom has declined significantly, ranking 161 out of 180 countries in the 2023 World Press Freedom Index.
Deepfake audio falsely depicts Philippines President Marcos ordering military attack on China
On April 23, 2024, a fabricated audio clip circulated on YouTube depicting Philippine President Ferdinand Marcos Jr. ordering his armed forces to take action against China amid escalating South China Sea tensions. The clip spread just as the Philippines and US began the annual Balikatan military exercises involving over 16,000 troops. The Presidential Communications Office immediately debunked the recording, stating 'no such directive exists,' and attributed the deepfake to a foreign actor. Philippine authorities worked with agencies and private sector stakeholders to remove the content and announced they would file cases against those responsible. The incident illustrated how AI-generated audio can be weaponised to inflame geopolitical tensions during sensitive military periods.
Misinformation about Israeli Prime Minister Benjamin Netanyahu’s whereabouts debunked
On March 13, 2024, social media users circulated false claims that Israeli Prime Minister Benjamin Netanyahu had been assassinated or was missing, citing a video alleged to show a six‑finger deep‑fake frame. The rumors spread on platforms such as X and YouTube. Netanyahu’s office, in a statement referenced by Anadolu Ajansı, clarified that the Prime Minister was alive and well, refuting the deep‑fake allegations. The incident highlights the rapid propagation of political disinformation during the West Asia conflict.
George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party
A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.
Two men killed in driverless Tesla crash in Spring, Texas after vehicle strikes tree and catches fire
Two men died in a Tesla crash in Spring, Texas, where no one was found behind the wheel, according to local police. The 2019 Tesla Model S crashed into a tree and caught fire, with one person in the front passenger seat and another in the rear. Preliminary investigations suggest no driver was present at the time of the crash. The incident has raised questions about Tesla's Autopilot and Full Self-Driving (FSD) systems, which are not fully autonomous. The National Highway Traffic Safety Administration (NHTSA) has launched a special investigation into the crash.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app that scrapes over 3 billion images from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, warning that the technology could effectively end privacy as we know it.
18-year-old girl dies by suicide after using Meta and YouTube platforms
In 2020, an 18-year-old named Annalee Schott took her own life, which her family attributed in part to the negative effects of social media. The Schott family has since blamed platforms like Meta and YouTube for harming children's mental health through addictive design. The article raises the question of whether legal or regulatory actions against these companies could mark a turning point for Big Tech, similar to the tobacco industry's past reckoning. The focus is on potential consequences for tech companies if they are held accountable for youth harm.
Caleb Cain's Radicalization via YouTube's Algorithm
A 26-year-old man from West Virginia, Caleb Cain, was radicalized by far-right content on YouTube over several years. He described how the platform's recommendation algorithm exposed him to extremist ideologies, including white supremacy and anti-feminism. The incident highlights concerns about algorithmic amplification of harmful content on YouTube.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.
KGM sues Meta and Google over Instagram and YouTube addiction beginning at age 6, leading to depression and suicidal thoughts — first bellwether trial
A woman identified as KGM (Kaley G.M.) filed one of the first bellwether cases in the Social Media Adolescent Addiction MDL, alleging that Instagram and YouTube addiction beginning when she was approximately 6 years old led to clinical depression and suicidal thoughts. The lawsuit names Meta, Google, TikTok, and Snapchat, with Snap settling before trial. In January and February 2026, KGM's case became the first social media addiction case to proceed to jury trial in Los Angeles, with her mother Karen Glenn also testifying. Expert witnesses including Stanford psychiatry professor Anna Lembke testified that social media addiction is real and can cause or worsen anxiety, depression, and suicidal thoughts. The trial's outcome is expected to influence over 1,000 similar lawsuits.
GamerGate Movement and Online Harassment of Feminist Critics
In August 2014, the #GamerGate movement emerged, leading to widespread online harassment and death threats against feminist critics such as Anita Sarkeesian and indie game developer Zoe Quinn. The movement was sparked by a blog post from Eron Gjoni about his breakup with Quinn, which led to coordinated online attacks. The harassment occurred across multiple platforms including Twitter, 4chan, IRC, and others.