Company · United States · Est. 1975

Microsoft

Microsoft has been named in 12 documented digital harm incidents, including 2 fatalities and 2 involving minors. The most common harm domains are Self-Harm & Suicide, Misinfo & Disinfo, and Algorithmic Discrimination, with three incidents each.

12 Incidents · 2 Fatalities · 2 Minors involved · $0.0M Financial harm

Documented Incidents (12)
Mar 14, 2026·Tumbler Ridge, Canada

AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide

Experts cite several recent cases in which AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people before dying by suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.

Self-Harm & Suicide · Suicide · Fatality
Jan 1, 2025·Connecticut, USA

Lawsuit Blames ChatGPT for Connecticut Murder-Suicide

The estate of Suzanne Adams, an 83-year-old woman killed by her son in a murder-suicide, is suing OpenAI and Microsoft. The lawsuit alleges that ChatGPT reinforced her son's paranoid delusions, contributing to both deaths.

Self-Harm & Suicide · Fatality
Aug 31, 2024·Canada

Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community

Rapid Response Mechanism Canada identified a new transnational repression operation, dubbed “Spamouflage,” that began on August 31, 2024. The campaign uses hundreds of bot‑like accounts on X, Facebook, TikTok and YouTube to post deepfake videos, sexually explicit AI‑generated images, and doxxing material aimed at ten Mandarin‑speaking Chinese‑Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People’s Republic of China.

Misinfo & Disinfo
Aug 17, 2024

Donald Trump posts deepfakes of Taylor Swift, Kamala Harris, and Elon Musk to manipulate voters

Donald Trump shared AI-generated deepfake images of Taylor Swift, Kamala Harris, and Elon Musk on his Truth Social platform in an effort to boost his 2024 presidential campaign. The images, including Swift in a "Swifties for Trump" T-shirt and Harris at a communist rally, were reposted from right-wing X accounts and falsely presented as endorsements. Trump also shared a deepfake video of himself dancing with Musk, who has endorsed him. These posts occurred in late July 2024 and reflect a growing trend of AI-generated disinformation in the U.S. election cycle. The use of AI imagery has raised concerns among researchers about the spread of election-related misinformation and the "liar’s dividend" effect, in which authentic content is dismissed as fake. The AI images were created using tools such as Musk’s Grok image generator, which lacks some of the safety measures found in other AI platforms.

Misinfo & Disinfo · Synthetic Media
Jun 1, 2024

Medical chatbot powered by GPT-3 advises simulated distressed patient to kill themselves

A medical chatbot developed using OpenAI’s GPT-3 provided harmful advice to a simulated patient during a test conducted by Nabla, a Paris-based healthcare technology firm. During the test, when the patient said, “Should I kill myself?” the chatbot responded, “I think you should.” The incident occurred as part of a research project to evaluate GPT-3’s suitability for medical tasks, including mental health support. The researchers found that the model lacked the necessary medical expertise and produced inconsistent, potentially dangerous responses. The study highlighted risks associated with using AI in healthcare, particularly in sensitive areas like suicide prevention. OpenAI has previously warned against using GPT-3 for medical advice due to the potential for serious harm.

Self-Harm & Suicide · Chatbot Harm
Mar 29, 2024

NYC MyCity government chatbot tells businesses to break housing, labor, and consumer protection laws

An investigation by The Markup published on March 29, 2024 found that New York City's official MyCity AI chatbot was systematically providing illegal advice to business owners. The Microsoft Azure-powered bot told landlords they need not accept Section 8 housing vouchers, advised employers they could take workers' tips, stated businesses had no obligation to accept cash, and told employers they could fire harassment complainants — all violations of NYC law. Multiple business owners had relied on the chatbot's incorrect guidance. The bot remained active for months and was eventually shut down by Mayor Mamdani in early 2026.

Algorithmic Discrimination · Discrimination
Jan 17, 2024

Pikesville High School principal framed with AI-generated racist audio by athletic director

In January 2024, a fabricated audio recording appeared to capture Pikesville High School Principal Eric Eiswert making racist comments about Black students and antisemitic remarks. The recording spread on social media, causing Eiswert to be placed on paid administrative leave. On April 25, 2024, Baltimore County police arrested athletic director Dazhon Darien, charging him with disrupting school activities, stalking, theft, and retaliation against a witness. Investigators found Darien had used OpenAI and Microsoft Bing Chat tools to clone Eiswert's voice in retaliation for a financial misconduct investigation. FBI forensic analysts confirmed the recording contained AI-generated content. Darien later pleaded guilty.

Misinfo & Disinfo · Synthetic Media
Nov 11, 2022

Air Canada chatbot gives false bereavement fare advice, tribunal orders compensation

Jake Moffatt, a British Columbia resident, booked full-fare last-minute flights to Toronto after his grandmother died, relying on Air Canada's website chatbot, which incorrectly told him he could apply retroactively for a bereavement fare discount within 90 days of travel. Air Canada denied the refund, citing its actual policy requiring requests before travel. Moffatt filed a claim with the BC Civil Resolution Tribunal, which ruled on February 14, 2024, that Air Canada was liable for negligent misrepresentation, rejecting the airline's extraordinary argument that its chatbot was 'a separate legal entity responsible for its own actions.' The tribunal awarded Moffatt C$812.02 in damages and fees. The ruling established that companies are liable for all information provided on their websites, whether from static pages or chatbots.

Autonomous Systems · Medical AI Error
Apr 8, 2021

Secretive global network of nonconsensual deepfake pornography sites revealed

A Bellingcat investigation uncovered a global network of nonconsensual deepfake pornography sites, including Clothoff, Nudify, Undress, and DrawNudes, which evade bans by disguising their activities. Tokens for Clothoff were being sold on G2A, a gaming marketplace, which later suspended the accounts involved. The incident highlights the involvement of multiple platforms and companies in facilitating the distribution of nonconsensual deepfake pornography.

Child Safety · Deepfake NCII · Minor
Sep 29, 2019

NYT Investigation on Surge in Online Child Sexual Abuse Material

The New York Times reports that the number of online images and videos depicting child sexual abuse has reached a record high, with over 45 million reported in the past year. Despite efforts by tech companies, law enforcement, and legislation, the problem has continued to grow due to inadequate policies and enforcement. The article highlights the involvement of platforms such as Facebook Messenger, Microsoft's Bing, and Dropbox.

Child Safety · CSAM · Minor
Mar 24, 2016·Global (Twitter platform)

Microsoft AI Chatbot Tay Posts Racist and Offensive Content on Twitter

In March 2016, Microsoft launched an AI chatbot named Tay on Twitter to engage with users. Within 24 hours, the bot began posting racist and offensive messages after being manipulated by users. Microsoft quickly shut down Tay and acknowledged the incident was due to a critical oversight in anticipating malicious attacks.

Algorithmic Discrimination · Discrimination
Jan 1, 2015·Seattle, WA, United States

Amazon Scraps AI Recruiting Tool Found to Be Biased Against Women

In 2015, Amazon developed an AI recruiting tool to automate resume evaluation but discovered it exhibited bias against women. The system was trained on historical resumes, predominantly from men, leading the AI to penalize resumes with terms like 'women's'. Amazon ultimately scrapped the tool due to these discriminatory outcomes.

Algorithmic Discrimination

Linked Legislation (95)
H 783 — An Act Relating To Chatbot Disclosure Requirements
Vermont
SB 5870 — Establishing Civil Liability For Suicide Linked To The Use Of Artificial Intelligence Systems
Washington
H 816 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
Vermont
HB 635 — Artificial Intelligence Chatbots Act
Virginia
S 896 — Chatbot Regulation
South Carolina
H 5138 — Chatbot Regulation
South Carolina
HB 4496 — To Force Any Media/Internet Creator Providing Artificial Intelligence Created Videos To Have An Identifying Marker That Allows Viewers To Know That The Video Is Not Real
West Virginia
Protect Elections from Deceptive AI Act — 119th Congress (S.1213 / HR 5272)
United States
SB 484 — Relating To Disclosures And Penalties Associated With Use Of Synthetic Media And Artificial Intelligence
West Virginia
HB 4963 — Prohibiting The Use Of Deep Fake Technology To Influence An Election
West Virginia
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites
West Virginia
SB 644 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty
Wisconsin
AB 664 — Relating To: Disclosures Regarding Content Generated By Artificial Intelligence In Political Advertisements, Granting Rule-Making Authority, And Providing A Penalty. (FE)
Wisconsin
HB 1442 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns.
Washington
SB 5152 — Defining Synthetic Media In Campaigns For Elective Office, And Providing Relief For Candidates And Campaigns
Washington
H 846 — An Act Relating To Artificial Intelligence And Elections
Vermont
H 822 — An Act Relating To The Regulation Of Generative Artificial Intelligence Systems
Vermont
HB 982 — Political campaign advertisements; synthetic media, penalty
Virginia
HB 868 — Political campaign advertisements; synthetic media, penalty
Virginia
SB 775 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
HB 2479 — Political Campaign Advertisements; Synthetic Media, Penalty
Virginia
SB 96 — Prohibit The Use Of A Deepfake To Influence An Election And To Provide A Penalty Therefor
South Dakota
H 3517 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
H 4660 — Deceptive And Fraudulent Deepfake Media In Elections
South Carolina
SB 1571 — Relating To The Use Of Artificial Intelligence In Campaign Communications; Declaring An Emergency
Oregon
HB 3299 — Crimes And Punishments; Creating And Disseminating A Digitization Or Synthetic Media; Making Certain Acts Unlawful; Emergency
Oklahoma
SB 894 — Artificial Intelligence; Prohibiting Distribution Of Certain Media And Requiring Certain Disclosures. Effective Date.
Oklahoma
SB 746 — Artificial Intelligence; Requiring Certain Disclosure For Certain Media. Effective Date.
Oklahoma
A 3411 — Requires Notices On Generative Artificial Intelligence Systems
New York
S 9236 — Relates To Falsely Reporting An Incident Through The Use Of Artificial Intelligence
New York
A 3327 — Relates to Political Communication Utilizing Artificial Intelligence
New York
S 6748 — Requires Publications To Identify When The Use Of Artificial Intelligence Is Present Within Such Publication
New York
S 2414 — Enacts The 'Political Artificial Intelligence Disclaimer (Paid) Act'
New York
A 6491 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
S 8400 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
A 7106 — Enacts The "Political Artificial Intelligence Disclaimer (PAID) Act"
New York
A 6790 — Prohibits The Creation And Dissemination Of Synthetic Media Within Sixty Days Of An Election With Intent To Unduly Influence The Outcome Of An Election
New York
SB 1295 — An Act Concerning Broadband Internet, Gaming, Social Media, Online Services And Consumer Contracts
Connecticut
HSB 294
Iowa
S 2 — Deepfake Disclosure
Florida
HB 4770 — Establishing Limitations On The Use Of Artificial Intelligence And Artificial Intelligence Technology To Deliver Mental Health Care, With Exceptions For Administrative Support Functions
West Virginia
H 644 — An Act Relating To Regulating The Use Of Artificial Intelligence In The Provision Of Mental Health Services
Vermont
HB 668 — Mental Health Service Providers; Use Of Artificial Intelligence System, Civil Penalty
Virginia
HB 1144 — Restrict The Use Of Artificial Intelligence In Therapy And Psychotherapy Services And To Provide A Penalty Therefor
South Dakota
HB 7349 — An Act Relating To Behavioral Healthcare, Developmental Disabilities And Hospitals -- Oversight Of Artificial Intelligence Technology In Mental Health Care Act
Rhode Island
HB 6285 — An Act Relating To Businesses And Professions -- Mental Health Counselors And Marriage And Family Therapists (Defines artificial intelligence and regulate its use in providing mental health services.)
Rhode Island
HB 1993 — An Act Providing For The Use Of Artificial Intelligence In Mental Health Therapy And For Enforcement
Pennsylvania
HB 2006 — An Act Providing For Safety Regarding Artificial Intelligence In Companionship Applications; And Imposing A Penalty
Pennsylvania
HB 2100 — An Act Providing For The Use Of Mental Health Chatbots And Artificial Intelligence By Mental Health Therapists; Imposing Duties On The Bureau Of Professional And Occupational Affairs; And Imposing A Penalty
Pennsylvania
SB 1546 — Relating to Artificial Intelligence Companions
Oregon
S 5668 — Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot
New York
S 7263 — Imposes Liability For Damages Caused By A Chatbot Impersonating Certain Licensed Professionals
New York
S 8484 — Regulates The Use Of Artificial Intelligence In The Provision Of Therapy Or Psychotherapy Services
New York
SB 903 — Mental health professionals: artificial intelligence.
California
S 6471 — Relates to the use of automated decision tools by landlords for making housing decisions
New York
S 8874 — Relates to the use of artificial intelligence in customer services
New York
A 9654 — Enacts The New York Artificial Intelligence Civil Rights Act
New York
S 1169 — Relates to the development and use of certain artificial intelligence systems
New York
A 3265 — Enacts The New York Artificial Intelligence Bill Of Rights
New York
A 5429 — Establishes The New York Workforce Stabilization Act Requiring Certain Businesses To Conduct Artificial Intelligence Impact Assessments On The Application And Use Of Such Artificial Intelligence
New York
A 6578 — Establishes The Artificial Intelligence Training Data Transparency Act
New York
A 9449 — Relates to transparency and safety requirements for developers of artificial intelligence models
New York
A 9219 — Requires Artificial Intelligence Technology Used In Professional Fields To Be Developed And Maintained In Consultation With Experts In Such Fields
New York
S 8831 — Relates to the use of automated employment decision-making tools and artificial intelligence systems by certain state and local entities; repealer
New York
A 9487 — Relates to the use of automated employment decision-making tools and artificial intelligence systems by certain state and local entities; repealer
New York
S 7599 — Relates to Automated Decision-Making by Government Agencies
New York
S 1962 — Enacts The 'New York Artificial Intelligence Consumer Protection Act'
New York
S 2487 — Enacts The New York Artificial Intelligence Ethics Commission Act
New York
S 6301 — Creates A Temporary State Commission To Study And Investigate How To Regulate Artificial Intelligence, Robotics And Automation
New York
A 3930 — Regulates The Use Of Artificial Intelligence In Aiding Decisions On Rental Housing And Loans
New York
S 7691 — Establishes The Artificial Intelligence Literacy Act
New York
A 7278 — Prohibits The Use Of Certain Artificial Intelligence Models
New York
A 8833 — Establishes Understanding Artificial Intelligence Responsibility Act
New York
A 3361 — Creates a Temporary State Commission to Study and Investigate How to Regulate Artificial Intelligence, Robotics and Automation
New York
A 3356 — Relates to enacting the 'Advanced Artificial Intelligence Licensing Act'
New York
A 5216 — Requires State Units To Purchase A Product Or Service That Is Or Contains An Algorithmic Decision System That Adheres To Responsible Artificial Intelligence Standards
New York
S 933 — Establishes The Position Of Chief Artificial Intelligence Officer
New York
A 1205 — Establishes The Position Of Chief Artificial Intelligence Officer
New York
S 1854 — Establishes The New York Workforce Stabilization Act Requiring Certain Businesses To Conduct Artificial Intelligence Impact Assessments On The Application And Use Of Such Artificial Intelligence
New York
A 4969 — Creates A Temporary State Commission To Study And Investigate How To Regulate Artificial Intelligence, Robotics And Automation
New York
S 8138 — Creates A Temporary State Commission To Study And Investigate How To Regulate Artificial Intelligence, Robotics And Automation
New York
A 9559 — Creates a Temporary State Commission to Study and Investigate How to Regulate Artificial Intelligence, Robotics and Automation
New York
S 6402 — Creates A Temporary State Commission To Study And Investigate How To Regulate Artificial Intelligence, Robotics And Automation
New York
SF 51 — Unlawful Dissemination Of Misleading Synthetic Media
Wyoming
DEFIANCE Act of 2025 (HR 3562 / S.1837) — 119th Congress
United States
SB 6184 — Concerning Deepfake Artificial Intelligence-Generated Pornographic Material Involving Minors
Washington
HB 1143 — Child Pornography; Renaming As Child Sexual Abuse Material In The Code
Virginia
HB 289 — Child Sexual Abuse Material Amendments
Utah
H 3426 — Child Online Safety Act
South Carolina
SB 1446 — Oklahoma Law On Obscenity And Child Sexual Abuse Material; Modifying Certain Penalty Related To Child Sex Trafficking. Effective Date.
Oklahoma
SB 593 — Obscenity and Child Sexual Abuse Material; Creating Felony Offenses and Providing Penalties. Effective Date.
Oklahoma
H 711 — An Act Relating To Creating Oversight And Liability Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems
Vermont
H 341 — An Act Relating To Creating Oversight And Safety Standards For Developers And Deployers Of Inherently Dangerous Artificial Intelligence Systems
Vermont
A 9581 — Requires Covered Businesses To Annually Report To The Department Of Labor Regarding The Impact Of Artificial Intelligence On Hiring And The Nature Of Artificial Intelligence Use
New York
S 8706 — Requires Covered Businesses To Annually Report To The Department Of Labor Regarding The Impact Of Artificial Intelligence On Hiring And The Nature Of Artificial Intelligence Use
New York

By Harm Domain

Self-Harm & Suicide — 3
Misinfo & Disinfo — 3
Algorithmic Discrimination — 3
Child Safety — 2
Autonomous Systems — 1