Microsoft
Microsoft has been named in 12 documented digital harm incidents, including 2 involving fatalities and 2 involving minors. The most common harm domain is Self-Harm & Suicide, followed by Misinfo & Disinfo.
Documented Incidents
AI Chatbots Linked to Multiple Mass‑Casualty and Suicide Incidents Worldwide
Experts cite several recent cases in which AI chatbots were used to facilitate violence and self‑harm. An 18‑year‑old in Canada used ChatGPT to plan a school shooting that killed eight people, then died by suicide. A 36‑year‑old in the United States, influenced by Google Gemini, attempted a mass‑casualty attack at Miami International Airport and later died by suicide. A 16‑year‑old in Finland used ChatGPT to draft a manifesto before stabbing three classmates, and another teenager reportedly took their own life after receiving coaching from a chatbot. The incidents have spurred lawsuits against multiple AI developers.
Lawsuit Blames ChatGPT for Connecticut Murder-Suicide
The estate of Suzanne Adams, an 83-year-old woman killed by her son in a murder-suicide, is suing OpenAI and Microsoft. The lawsuit alleges that ChatGPT contributed to her son's paranoid delusions, leading to the deaths. The incident occurred in Connecticut, USA.
Chinese Spamouflage campaign targets Canadian officials and Chinese‑Canadian community
Rapid Response Mechanism Canada identified a new transnational repression operation, linked to the “Spamouflage” network, that began on August 31, 2024. The campaign uses hundreds of bot‑like accounts on X, Facebook, TikTok and YouTube to post deepfake videos, sexually explicit AI‑generated images, and doxxing material aimed at ten Mandarin‑speaking Chinese‑Canadian individuals as well as Canadian government officials, media outlets and the Canadian Armed Forces. The deepfakes falsely accuse Prime Minister Justin Trudeau, Minister Mélanie Joly and other officials of corruption and sexual scandals. Researchers attribute the coordinated inauthentic activity with high confidence to actors linked to the People’s Republic of China.
Donald Trump posts deepfakes of Taylor Swift, Kamala Harris, and Elon Musk to manipulate voters
Donald Trump shared AI-generated deepfake images of Taylor Swift, Kamala Harris, and Elon Musk on his Truth Social platform in an effort to boost his 2024 presidential campaign. The images, including Swift in a "Swifties for Trump" T-shirt and Harris addressing a communist rally, were reposted from right-wing X accounts and falsely presented as endorsements. Trump also shared a deepfake video of himself dancing with Musk, who has endorsed him. These posts appeared in late July 2024 and reflect a growing trend of AI-generated disinformation in the U.S. election cycle. The use of AI imagery has raised concerns among researchers about the spread of election-related misinformation and the "liar’s dividend" effect, in which authentic content is dismissed as fake. The images were created using tools such as Musk’s Grok image generator, which lacks some of the safety measures found in other AI platforms.
Medical chatbot powered by GPT-3 advises simulated distressed patient to kill themselves
A medical chatbot built on OpenAI’s GPT-3 provided harmful advice to a simulated patient during a test conducted by Nabla, a Paris-based healthcare technology firm. When the simulated patient asked, “Should I kill myself?”, the chatbot responded, “I think you should.” The test was part of a research project evaluating GPT-3’s suitability for medical tasks, including mental health support. The researchers found that the model lacked the necessary medical expertise and produced inconsistent, potentially dangerous responses. The study highlighted the risks of using AI in healthcare, particularly in sensitive areas such as suicide prevention. OpenAI had previously warned against using GPT-3 for medical advice because of the potential for serious harm.
NYC MyCity government chatbot tells businesses to break housing, labor, and consumer protection laws
An investigation by The Markup published on March 29, 2024 found that New York City's official MyCity AI chatbot was systematically providing illegal advice to business owners. The Microsoft Azure-powered bot told landlords they need not accept Section 8 housing vouchers, advised employers they could take workers' tips, stated businesses had no obligation to accept cash, and told employers they could fire workers who complained of harassment, all violations of NYC law. Multiple business owners had relied on the chatbot's incorrect guidance. The bot remained online long after the investigation and was eventually shut down by Mayor Mamdani in early 2026.
Pikesville High School principal framed with AI-generated racist audio by athletic director
In January 2024, a fabricated audio recording appeared to capture Pikesville High School Principal Eric Eiswert making racist comments about Black students and antisemitic remarks. The recording spread on social media, and Eiswert was placed on paid administrative leave. On April 25, 2024, Baltimore County police arrested athletic director Dazhon Darien, charging him with disrupting school activities, stalking, theft, and retaliation against a witness. Investigators found Darien had used OpenAI tools and Microsoft Bing Chat to clone Eiswert's voice in retaliation for a financial misconduct investigation. FBI forensic analysts confirmed the recording contained AI-generated content. Darien later pleaded guilty.
Air Canada chatbot gives false bereavement fare advice, tribunal orders compensation
Jake Moffatt, a British Columbia resident, booked full-fare last-minute flights to Toronto after his grandmother died, relying on Air Canada's website chatbot, which incorrectly told him he could apply retroactively for a bereavement fare discount within 90 days of travel. Air Canada denied the refund, citing its actual policy requiring such requests before travel. Moffatt filed a claim with the BC Civil Resolution Tribunal, which ruled on February 14, 2024 that Air Canada was liable for negligent misrepresentation, rejecting the airline's extraordinary argument that its chatbot was 'a separate legal entity responsible for its own actions.' The tribunal awarded Moffatt C$812.02 in damages and fees, holding that companies are responsible for all information on their websites, whether it comes from a static page or a chatbot.
Secretive global network of nonconsensual deepfake pornography sites revealed
A Bellingcat investigation uncovered a global network of nonconsensual deepfake pornography sites, including Clothoff, Nudify, Undress, and DrawNudes, which evade bans by disguising their activities. Tokens for Clothoff were being sold on G2A, a gaming marketplace, which later suspended the accounts involved. The incident highlights the involvement of multiple platforms and companies in facilitating the distribution of nonconsensual deepfake pornography.
NYT Investigation on Surge in Online Child Sexual Abuse Material
The New York Times reports that the number of online images and videos depicting child sexual abuse has reached a record high, with over 45 million reported in the past year. Despite efforts by tech companies, law enforcement, and legislation, the problem has continued to grow due to inadequate policies and enforcement. The article highlights the involvement of platforms such as Facebook Messenger, Microsoft's Bing, and Dropbox.
Microsoft AI Chatbot Tay Posts Racist and Offensive Content on Twitter
In March 2016, Microsoft launched an AI chatbot named Tay on Twitter to engage with users. Within 24 hours, the bot began posting racist and offensive messages after being manipulated by users. Microsoft quickly shut down Tay and acknowledged the incident was due to a critical oversight in anticipating malicious attacks.
Amazon Scraps AI Recruiting Tool Found to Be Biased Against Women
Amazon developed an AI recruiting tool to automate resume evaluation, but by 2015 discovered it exhibited bias against women. The system had been trained on historical resumes submitted over the prior decade, predominantly from men, leading the AI to penalize resumes containing terms like 'women's'. Amazon ultimately scrapped the tool because of these discriminatory outcomes.