Facebook has been named in 47 documented digital harm incidents, including 3 fatalities and 9 involving minors. The most common harm domain is Fraud & Financial, followed by Misinfo & Disinfo.
Documented Incidents
Retiree defrauded via pig butchering scam initiated on Facebook and encrypted messaging apps
A Bedford, Indiana retiree named Timothy Patton lost $10,000 to a pig-butchering scam after being targeted online through a fake investment group. The scam involved a fake advisor named "Sabrina" and a fraudulent trading platform that claimed he earned $15 million from his investment. Patton was contacted through Facebook and used encrypted messaging apps like WhatsApp and Signal to communicate with the scammers, who sent him a gold coin in the mail as part of the scam. He filed complaints with the FBI, the Federal Trade Commission, and the SEC, and WRTV Investigates confirmed the trading platform was fake. The Wisconsin Department of Financial Institutions filed a cease-and-desist order against "Sabrina" and the same platform, seeking $17,000 in restitution for a separate victim. The FBI reported that cryptocurrency investment scams, including pig-butchering, cost $5.8 billion in 2024, with people over 60 being the hardest hit.
20-year-old woman awarded $4.2 million after Meta and YouTube found liable for mental health harm via addictive platform design
On March 25, a jury in Los Angeles, California, found Meta and YouTube liable for negligence in a case involving youth addiction and mental health. The plaintiff, a now 20-year-old woman known as Kaley G.M., claimed she became addicted to Instagram and YouTube during grade school, which contributed to her anxiety and depression. Meta was ordered to pay $4.2 million in damages, and YouTube was ordered to pay $1.8 million. The case is significant because it challenges Section 230 of the Communications Decency Act, which has previously shielded social media companies from liability. The ruling sets a legal precedent by suggesting that social media platforms can be held responsible for personal injury caused by their product design. Meta has stated it is considering an appeal.
Florida opens investigation into Discord over child safety failures and predator access
Florida is investigating the Discord app over child safety concerns, following reports of abductions and grooming. The investigation, led by Florida Attorney General James Uthmeier, claims the app puts children at risk by allowing predators to access young users. Discord is marketed as a communication platform for young people, similar to Facebook or Instagram, and is used by millions, including Gen Z users for gaming and social interaction. The state has issued subpoenas for marketing and promotional documents related to Discord, as well as other platforms like TikTok and Roblox. A 2022 safety message from Discord states the app includes tools to help users avoid inappropriate content or unwanted contact. The investigation is part of a broader push by Florida to address online safety risks for children.
Meta and Google sued over design features alleged to create child addiction in Los Angeles trial
A federal trial in Los Angeles is examining claims that Meta and Google deliberately engineered features such as infinite scroll, autoplay videos, and constant notifications to foster addiction among children. Plaintiffs argue these design elements function like a drug, citing internal documents and testimony from former Meta employee Arturo Béjar. The companies contend they have taken steps to make their platforms safer. The case is being compared to historic tobacco litigation and could set precedents for corporate responsibility in digital product design.
Los Angeles woman loses $81,000 and home in AI deepfake romance scam
In Los Angeles, California, a woman identified as Abigail was targeted by a deep‑fake romance scam that began on Facebook and continued on WhatsApp. Scammers used AI‑generated video and voice to impersonate actor Steve Burton, persuading her to send gift cards, cash and cryptocurrency totaling $81,000. They then pressured her to sell her condominium at a steep discount to a wholesale real‑estate company, causing her to lose the equity and her home. The LAPD recorded the losses, but the funds were not recovered, and the family pursued a civil lawsuit to contest the sale.
Hungarian election campaign undermined by AI-generated smears spread via Facebook fight groups
Hungary's 2026 election campaign was marked by AI-generated smear videos and a coordinated Facebook 'fight club' network used to amplify disinformation against opposition candidates. Fact-checkers documented synthetic media depicting fabricated statements by political figures, circulating widely before the vote.
AI-generated deepfake videos spread political disinformation in Bangladesh without platform intervention
AI-generated videos are spreading disinformation online in Bangladesh ahead of the 13th national election. A video featuring a woman resembling Rikta, a garment worker who lost her arm in the 2013 Rana Plaza collapse, falsely accused a political party of fraud and was shared over 21,000 times on the Uttarbanga Television Facebook page. The video, uploaded on 10 January, was identified as AI-generated after fact-checking by Prothom Alo. The Representation of the People Order prohibits the use of AI to create misleading content during elections, but such content continues to circulate. The Bangladesh Army issued a warning on 14 January about AI-generated videos misrepresenting military personnel, but the videos remain online. Authorities have yet to take action, despite the potential for such content to incite violence or confusion among voters.
Western New York couple defrauded by AI voice‑cloning scam
In East Aurora, New York, a couple reported that scammers used artificial‑intelligence voice‑cloning technology to impersonate the couple's relative, Amy, and persuaded Amy's elderly mother‑in‑law to wire nearly $10,000 as a fabricated bail payment. The fraudsters claimed Amy was in jail for a homicide and even sent someone in person to collect the cash. The victims filed a police report but have not received updates on the investigation. Experts cited the case as an example of how AI‑generated voice deepfakes are amplifying traditional financial scams.
Amnesty warns Meta over Facebook-fueled attacks on Bangladeshi media ahead of 2026 elections
Amnesty International issued a warning that Bangladesh faced heightened risk of human‑rights abuses ahead of its February 2026 parliamentary elections due to harmful content on Meta's Facebook platform. Misleading and inflammatory posts, many traced to India, amplified sectarian narratives and labeled local outlets The Daily Star and Prothom Alo as "Indian agents" and "anti‑national forces," sparking mob attacks on their offices in Dhaka on 18 December 2025. Bangladeshi authorities reported the incidents to Meta, citing delays in removing violent content, and Amnesty called for emergency mitigation measures and stronger safeguards to prevent online incitement from translating into real‑world violence.
MBA graduate in India loses Rs 9.29 lakh to pig butchering cryptocurrency scam
A 41-year-old MBA graduate from Bachupally, Hyderabad, lost Rs.9,29,786 in a pig-butchering fraud between November 5 and November 14. The victim received a Facebook friend request from a profile named Amrutha Chowdary, who claimed to have studied at the same college. She guided him through WhatsApp to a fake trading app, promising returns on gold-yield trading in US dollars. After an initial withdrawal of Rs.4,608, he was convinced to invest further, transferring Rs.8,84,394 into another bank account. When he attempted to withdraw the funds, the request failed, and he reported the incident to the cybercrime helpline 1930. Cyberabad cybercrime police have registered a case and are investigating.
Victims across the US defrauded by AI voice cloning scams impersonating family members
Patty Greiner lost $15,000 after receiving a text claiming her Amazon account was hacked and later being contacted by individuals impersonating IRS agents and law enforcement. Scammers are using AI to clone voices by extracting personal information from social media platforms like TikTok, Instagram, and Facebook. Cybersecurity expert Dave Hatter demonstrated how easily a voice can be cloned using free software, warning that this could lead to a surge in crime. Impersonators range from individuals to organized criminal gangs and nation-state actors from countries like China, Russia, and Iran. Experts advise not to use links or numbers provided by suspicious callers and to verify the legitimacy of requests directly with the organization or person involved.
AI‑generated political deepfakes targeting Pennsylvania officials ahead of 2026 elections
In October 2025, Republican candidate Stacy Garrity posted AI‑generated images of Democratic Governor Josh Shapiro on Facebook, and State Senator Doug Mastriano shared an AI‑generated video of Shapiro. The deepfakes, ranging from cartoon‑style pictures to a Hollywood‑sign meme, were designed to mislead voters ahead of the 2026 midterm elections. Experts from the American Association of Political Consultants, Quantum Communications, and MFStrategies warned about the expanding use of generative AI in political campaigns and urged greater voter media‑literacy. The incident coincided with Pennsylvania legislative efforts to regulate deepfakes and a conflicting executive order from President Trump.
Scammers spend $49 million on Meta deepfake political advertising targeting vulnerable users
Scammers spent $49 million on Meta platforms, including Facebook and Instagram, using deepfake videos of U.S. politicians and celebrities to promote fraudulent government benefit schemes, according to a report by the Tech Transparency Project. The investigation identified 63 scam advertisers responsible for over 150,000 political scam ads, often targeting seniors with fake stimulus checks and Medicare benefits. These ads used AI-generated deepfake videos to create a false sense of legitimacy. Despite Meta's policies against such scams and requirements for political ad verification, many ads remained online for days or weeks before removal. Nearly half of the scam advertisers were still active as of late September 2025. The incident has raised concerns about Meta's content moderation and ad review systems, prompting calls for stronger controls and transparency in online political advertising.
Couple defrauded of retirement savings through pig butchering scam initiated via Facebook
A family in Shoreline, Washington, lost nearly $40,000 in a "pig butchering" scam between May 2025 and November 2025. The scam began when the wife responded to a Facebook ad offering money for liking and sharing videos online. The Hovland family and others were paid initially, which led them to reinvest money in the scheme for about six months. The scam ended abruptly when the company’s website disappeared and they were told they were fired, with all their invested money gone. The FBI described the scam as a type of fraud where victims are lured with initial payments before being cut off and losing large sums. The FBI noted that these scams result in tens of billions of dollars in losses annually in the U.S., and recovering funds is extremely difficult once the scam is discovered.
Bay Area retiree loses $500,000 life savings to pig butchering scammer posing as romantic interest
A Bay Area retiree lost $500,000 — his life savings — after being romanced online by a scammer posing as a woman. Despite warnings from family and friends, he continued wiring money to fake cryptocurrency investment platforms. The FBI and Secret Service were unable to recover the funds.
Ontario senior thwarted AI voice‑cloned grandparent scam at CIBC branch
In 2021, Marilyn Crawford, a senior in Oshawa, Ontario, received a phone call that sounded like her grandson claiming he had been arrested and needed $9,000 for bail. The scammers used AI voice‑cloning technology to mimic her grandson’s voice, a tactic highlighted by recent FinCEN reports. Crawford traveled to a CIBC branch to withdraw the funds, but a bank employee flagged the transaction and a financial advisor contacted her son, preventing the loss. The incident shows how fraudsters harvest personal data from social media to create convincing deep‑fake voice scams targeting older adults.
AI-generated disinformation targets Bangladesh Nationalist Party members ahead of elections
As Bangladesh's elections approached, AI-generated disinformation campaigns targeted the BNP opposition party on Facebook. Fact-checking organizations documented fabricated audio and video clips attributed to political leaders, designed to suppress voter support and spread false narratives about the party.
Meta removes 2 million accounts linked to pig butchering scam networks across its platforms
Meta removed over 2 million accounts linked to "pig-butchering" scams in 2024, which involve scammers building fake online relationships to defraud victims of cryptocurrency investments. The scams often begin on dating apps or social media platforms like Facebook, Instagram, and WhatsApp, before moving to Telegram, which is known for limited moderation. In September 2024, the FBI reported that victims lost nearly $4 billion to crypto investment scams, primarily pig-butchering. Meta announced new measures, including automatically flagging potential scam messages and collaborating with other tech companies through the Tech Against Scams coalition. The company also took down accounts linked to a scam operation in Cambodia, which had used AI tools like ChatGPT to communicate with victims. Critics, however, argue that these efforts are insufficient and too slow to address the growing scale of the problem.
Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas
Researchers at Graphika identified a Chinese state‑linked influence campaign, dubbed "Spamouflage," that created a network of fake social‑media accounts impersonating U.S. voters, soldiers and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine and Israel. Meta linked the network to Chinese law enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China's use of deceptive online behavior to portray the United States as politically unstable.
Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community
Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed "Spamouflage," that began on August 31, 2024. The campaign uses hundreds of bot‑like accounts on X, Facebook, TikTok and YouTube to post deepfake videos, sexually explicit AI‑generated images, and doxxing material aimed at ten Mandarin‑speaking Chinese‑Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People's Republic of China.
Meta Settles Texas Biometric Privacy Lawsuit for $1.4 Billion
Meta has reached a $1.4 billion settlement with the Texas Attorney General over alleged violations of the Texas Biometric Privacy Law. The case involves unauthorized collection and use of biometric data from users of Meta's platforms, including Facebook and Instagram. This is reported to be the largest settlement of its kind in history.
Will County election worker awarded $46,000 after doxing campaign tied to social media
A Will County judge awarded Ellen Moriarty nearly $46,000 in damages following a trial under Illinois' Civil Liability for Doxing Act. The case involved a fabricated Facebook post that falsely attributed to Moriarty a statement praising the attempted assassination of former President Donald Trump. The post was shared by Michael Gondek, who testified he is a graphic design professional and admitted to manipulating the image. The judge ruled that the doxing caused Moriarty serious harm, including mental anguish and disruption to her life. The verdict is considered one of the first under Illinois' 2024 doxing law, which allows civil action against those who intentionally publish harmful, identifiable information online.
Scammers use AI deepfake to steal $25M from engineering firm Arup
In early 2024, scammers employed AI‑generated deepfake video and audio to impersonate the chief financial officer of British engineering firm Arup. The fraudsters convinced an employee to transfer roughly HK$200 million (about US$25 million) to five Hong Kong bank accounts. Hong Kong police identified the scheme after the employee reported the transfers, and Arup notified authorities. Experts warned that generative AI is lowering the barrier for sophisticated financial fraud and urged CFOs to adopt stricter verification controls.
San Jose widow loses nearly $1 million to pig butchering scam; ChatGPT helps her identify the fraud
A San Jose widow, Margaret Loke, lost nearly $1 million in a crypto "pig-butchering" scam after a scammer posing as a romantic partner, "Ed," convinced her to invest in fake cryptocurrency platforms. The scam, which began in May 2024 via Facebook and WhatsApp, involved fabricated investment returns and emotional manipulation. Loke sent escalating amounts, including $490,000 from her IRA and $300,000 from a second mortgage, before realizing the scam when her account "froze." After consulting ChatGPT, she was alerted to the scam and reported it to the police. The funds were traced to a bank in Malaysia, where scammers withdrew them. Federal regulators warn that such relationship-based crypto scams are a growing threat, with limited chances of recovering funds once they leave U.S. banking systems.
Pro-Modi social media network spreads AI-generated disinformation during 2024 Indian election campaign
In early May 2024, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) used the term "Vote Jihad" during election campaigning, which was later adopted by affiliated groups like the Vishwa Hindu Parishad (VHP) on social media platforms such as Facebook. A report by The London Story (TLS) found at least 21 instances in March and 33 in April where the BJP’s Facebook page and affiliated accounts spread Islamophobic narratives. The disinformation campaign targeted India’s 200 million Muslim voters and was part of a broader effort to amplify divisive rhetoric between Hindus and Muslims. A study by Oxford University noted that the BJP dominated digital campaigning on platforms like YouTube and WhatsApp, while other parties struggled to respond effectively. Meta, which owns Facebook and Instagram, approved ads containing hate speech and AI-manipulated content, despite pledging to prevent such material during the election. India’s press freedom has declined significantly, ranking 161 out of 180 countries in the 2023 World Press Freedom Index.
69-year-old man defrauded $164,000 via pig-butchering scam on Facebook
The Iowa Attorney General’s office warned about a rise in "pig-butchering" scams in Des Moines, Iowa. These scams involve con artists building trust with victims over time, often through social media, before asking for money via cryptocurrency. A 69-year-old man from southeast Iowa was scammed out of $164,000 after being befriended by a scammer posing as a young woman named "Delia" from Illinois. The scammer convinced him to invest in a fake company using Bitcoin, using fake investment statements to encourage larger payments. The victim borrowed against his Harley Davidson motorcycle titles to fund the scam. The Iowa Attorney General’s office urged people to be cautious of online strangers and avoid sending money through cryptocurrency.
AI-generated disinformation disrupts Bangladesh's 2024 general election campaign
A report by *The Daily Star* and cited in the *Financial Times* highlights the use of AI-generated disinformation in Bangladesh ahead of its January 2024 elections. Pro-government outlets and influencers have used AI tools like HeyGen to create fake news clips and deepfake videos targeting both the ruling party and opposition Bangladesh Nationalist Party (BNP). Examples include an AI-generated news anchor criticizing the U.S. and a deepfake video falsely showing an opposition leader downplaying support for Gazans. The disinformation is spreading on platforms like X and Facebook, with Meta removing some content after being contacted by the *Financial Times*. Experts warn that the lack of regulation and the potential for bad actors to falsely claim content is AI-generated could further erode public trust in information. The issue is part of a growing global concern about AI's role in elections, particularly in smaller markets that may be overlooked by major tech companies.
US victims lose billions to cryptocurrency pig butchering scams operating through social media platforms
Crypto investment scams, specifically "pig butchering" schemes, have caused significant financial losses, with over $1.9 billion reported in the first half of 2024, according to the FBI. Shai Plonski, a man from Sebastopol, California, was scammed after being groomed by a woman he met on a Facebook dating site, who convinced him to invest in cryptocurrency. After losing his life savings, Plonski discovered he had been a victim of a scam. The FBI and officials warn that these scams often involve long-term manipulation and can lead to victims liquidating assets like 401Ks or taking out loans. Additionally, ABC News found that many scammers in Southeast Asia, Africa, and South America are themselves victims of human trafficking, forced to work in scam compounds. A woman from South Africa, who was trafficked to Myanmar under the pretense of a customer service job, described being held in a scam compound and forced to target victims like Plonski.
Pig butchering victim recovers $140,000 after investigators trace cryptocurrency to scam wallets
Aleksey Madan, a 69-year-old victim of a cryptocurrency scam, recently recovered $140,000 he had lost to a fraudulent company called SpireBit. Massachusetts authorities seized the funds as part of an investigation into SpireBit, which targeted Russian-speaking seniors with fake investment opportunities. The scam, known as "pig butchering," involved building trust with victims before stealing large sums of money. SpireBit used social media ads with a fake Elon Musk endorsement and provided false information about executives and a London address. Massachusetts officials, following an NPR investigation, obtained a court order to freeze SpireBit’s assets on Binance and recovered $269,000, which is being returned to victims. The FBI reported that crypto scammers stole over $5.6 billion from Americans in 2022.
George Freeman MP targeted by AI deepfake video falsely claiming he defected to rival party
A British member of Parliament, George Freeman, was targeted by an AI-generated deepfake video falsely claiming he had defected to a rival political party. The incident occurred in late 2023 and was discussed in a parliamentary hearing in early 2024. During a hearing before the House of Commons Science, Innovation and Technology Committee, representatives from Meta, Google, and X (formerly Twitter) were questioned about how the deepfake spread on their platforms. The companies provided explanations of their policies but did not commit to specific actions to prevent similar incidents or address the spread of the fake video. Freeman criticized the platforms for failing to act decisively and called for legislation to protect individuals from identity theft and misuse through AI. The hearing highlighted concerns about the spread of political misinformation and its threat to democratic processes in the UK.
Slovak election campaign targeted by AI deepfake disinformation spread by trolls
Trolls in Slovakia used AI-generated deepfake voices of politicians to spread disinformation ahead of the parliamentary elections, which took place in early October 2023. The deepfake videos, featuring audio impersonating political figures like Michal Šimečka and Zuzana Čaputová, were shared on platforms such as Facebook, Instagram, and Telegram. The content was found to be synthesized using AI tools trained on real voice samples, with some clips remaining online without disclaimers. Meta stated that political posts are not subject to fact-checking to preserve free speech, but fact-checkers continue to debunk false claims. The use of AI deepfakes in this election highlighted growing concerns about disinformation and its potential to influence voter behavior in closely contested races. Researchers noted that deepfake technology has become more accessible, enabling coordinated manipulation efforts.
Father Loses $280K to 'Pig Butchering' Scam on Facebook
A father lost $280,000 after falling victim to a 'pig butchering' scam that began on Facebook, in which fraudsters built trust before luring him into fraudulent financial transactions. The incident underscores the growing threat of relationship-based investment fraud on social media and the importance of vigilance and protective measures against online financial scams.
Santa Monica Software Developer Loses $740,000 in Pig Butchering Scam
Warren Dang, a Santa Monica software developer, lost $740,000 after a scammer posing as a romantic partner named 'Jenny' on a dating app lured him into a fake cryptocurrency investment platform. The scam unfolded over weeks as the scammer built emotional trust before convincing him to transfer escalating sums. Dang was among hundreds of Californians defrauded in pig butchering schemes that year, with statewide losses exceeding $1.1 billion in 2023.
Deepfake Pornography Victim in Sheffield, UK Sparks Legal Action and Awareness
Helen Mort, a poet and broadcaster from Sheffield, UK, discovered that deepfake pornography had been created using her images without her consent. The deepfake content was generated from photos she had posted on her now-deleted Facebook profile and depicted her in violent pornographic material. The incident has drawn attention to the growing issue of non-consensual deepfake pornography and has contributed to legal efforts to address such harms.
Amnesty International Reports Facebook Algorithms Promoted Violence Against Rohingya in Myanmar Genocide
Amnesty International found that Facebook's algorithms proactively promoted anti-Rohingya hate content in Myanmar, contributing to violence during the 2017 genocide. Meta failed to act despite awareness of the risks and profited from engagement generated by such content. The incident highlights the role of algorithmic amplification in real-world harm.
Facebook's System Approved Dehumanizing Hate Speech Inciting Genocide During Ethiopia Civil War
In June 2022, Global Witness and Foxglove tested Facebook's content moderation system by submitting ads containing dehumanizing hate speech inciting genocide in Ethiopia. Despite the explicit nature of the content, Facebook's system approved the ads. After being informed of the issue, Meta acknowledged the problem.
Global Witness Report: Facebook Approves Hate Speech Ads Targeting Rohingya in Myanmar
Global Witness found that Facebook approved advertisements containing hate speech targeting the Rohingya Muslim minority in Myanmar. Despite Facebook's claims of improved hate speech detection in Burmese, eight test ads with hate speech were submitted and all were approved for public display. The incident highlights concerns about Facebook's moderation practices and algorithmic amplification of harmful content.
Facebook whistleblower Frances Haugen testifies on Instagram's harmful effects on children and societal division
Frances Haugen, a former Facebook employee, testified before the Senate Commerce Subcommittee, revealing internal research that showed Facebook was aware of Instagram's harmful effects on teenage girls' mental health. She accused the company of prioritizing profit over user safety and called for government intervention.
Over 2,000 families sue Meta, TikTok, Snapchat, and YouTube over children's mental health harms
More than 2,000 families are suing social media companies including TikTok, Snapchat, YouTube, Roblox, and Meta (parent company of Instagram and Facebook) over the impact of social media on children's mental health. The lawsuits allege that platforms like Instagram contributed to the development of depression and eating disorders in minors. One case involves the Spence family from Long Island, New York, whose daughter Alexis developed an eating disorder at age 12 after using Instagram, which she accessed by falsely checking a 13+ age box. Alexis reported that Instagram's algorithm led her to pro-anorexia content, which normalized disordered eating behaviors and worsened her mental health. The lawsuits are expected to move forward in 2024, with over 350 cases anticipated to proceed.
Facebook collects Illinois users' biometric data without consent, $650 million BIPA settlement
Illinois Facebook users who participated in a $650 million biometric privacy settlement received a third and final payment of $7.20 in early December 2023. The settlement, approved in February 2021, was the result of an 8.5-year lawsuit filed in 2015 by Chicago attorney Jay Edelson on behalf of plaintiff Carlo Licata, alleging Facebook violated Illinois privacy law by using facial recognition without consent. The settlement covered about 7 million Illinois Facebook users who had face templates created after June 7, 2011, with over 1 million claimants receiving a total of about $435 each after three payments. The Illinois Biometric Information Privacy Act (BIPA), passed in 2008, requires companies to obtain consent before using biometric data. The settlement’s remaining funds will be donated to the American Civil Liberties Union of Illinois after the final distribution.
Woman whose son died from drugs bought on social media celebrates verdicts against Meta and YouTube
A Colorado woman, Kimberly Osterman, celebrated recent verdicts against Meta and YouTube, which were found liable for harms to children due to platform design. Her son, Max Osterman, died in 2021 at age 18 after purchasing a fentanyl-laced pill through Snapchat. In Los Angeles, a jury ruled that Meta and YouTube designed their platforms to hook young users, and in New Mexico, Meta was found to have knowingly harmed children’s mental health and concealed information about child sexual exploitation. Snap Inc., the parent company of Snapchat, and TikTok settled before the Los Angeles trial began. Osterman is part of Parents for Safe Online Spaces, advocating for the Kids Online Safety Act, which would require social media platforms to take steps to prevent harm to minors. The drug dealer who sold Max the pill was sentenced to six years in prison in 2023.
Stalkerware app targets victims globally, exposing locations and messages without consent
Cybersecurity researchers from Kaspersky identified a new stalkerware app called MonitorMinor that enables covert surveillance of users' devices, including access to messages, location, and social media. The app bypasses standard security controls by gaining root access, allowing abusers to monitor victims without their knowledge. MonitorMinor can also extract sensitive files to unlock devices and erase its own digital traces, making it extremely difficult for victims to detect. The app is not available on major app stores like Google Play or Apple's App Store, suggesting it does not meet standard privacy requirements. It has been most frequently installed in India and Mexico, with significant global reach. The Coalition Against Stalkerware, including NortonLifeLock, has raised concerns about the app's potential for abuse despite MonitorMinor's claims it is intended solely for parental monitoring.
Clearview AI's Facial Recognition App and Privacy Concerns Exposed by New York Times
Clearview AI, a secretive company founded by Hoan Ton-That and Richard Schwartz, developed a facial recognition app that scrapes over 3 billion images from social media and other websites. The app is used by over 600 law enforcement agencies to solve crimes but raises serious privacy concerns. The New York Times exposed the company's operations, describing the app as a potential end to privacy as we know it.
NYT Investigation on Surge in Online Child Sexual Abuse Material
The New York Times reports that the number of online images and videos depicting child sexual abuse has reached a record high, with over 45 million reported in the past year. Despite efforts by tech companies, law enforcement, and legislation, the problem has continued to grow due to inadequate policies and enforcement. The article highlights the involvement of platforms such as Facebook Messenger, Microsoft's Bing, and Dropbox.
Cambridge Analytica harvests Facebook data of 87 million users without consent for political targeting
In March 2018, The Guardian and The New York Times revealed that Cambridge Analytica had harvested the personal data of up to 87 million Facebook users without their consent. The data was used for political purposes, including influencing the 2016 U.S. presidential election and the Brexit vote. The data was collected through an app called "thisisyourdigitallife", raising significant concerns about privacy and surveillance.
Russia's Internet Research Agency targets U.S. with social media disinformation during 2016 election
The Senate Intelligence Committee revealed that Russia's Internet Research Agency used social media platforms including Facebook, Instagram, and Twitter to target African Americans and spread disinformation aimed at sowing racial discord during the 2016 U.S. election. The agency's content was heavily focused on race-related themes. This incident highlights foreign interference through digital platforms during a critical U.S. political event.
Facebook Emotional Contagion Experiment Without User Consent
In 2012, Facebook conducted a study where it manipulated the news feeds of nearly 700,000 users to observe emotional responses, altering content to be more positive or negative. The experiment was carried out without explicit user consent beyond the general terms of data use. The incident sparked significant controversy over user privacy and ethical research practices.