A 9185 — Relates to falsely reporting an incident through the use of artificial intelligence
This bill addresses the use of artificial intelligence to falsely report incidents, criminalizing the misuse of AI to create false incident reports. The legislation targets fraud and financial harm caused by synthetic media or AI-generated content used to deceive law enforcement or emergency services.
Linked Incidents
5 incidents this policy has been directly linked to
69-year-old grandmother nearly scammed by AI voice cloning impersonating grandson under arrest
French woman loses 1.3 billion won to AI-generated Brad Pitt impersonation scam in South Korea
Deltona man loses $900 to fake Tesla Cybertruck sweepstakes scam on Facebook
Crypto executive loses over $100,000 in Zoom call scam by Elusive Comet group
African billionaire's X account hijacked to defraud victims of $1.48 million
Related Incidents
Same harm domain; actors and location may differ
Retired Army officer loses ₹1 crore to deepfake investment scam using AI-generated Modi and Sitharaman videos
Middle-aged couple in Gujarat loses $300 to AI voice cloning fraud targeting their son
Indiana retiree loses $10,000 to pig-butchering scam via Facebook and encrypted messaging apps
Finance director loses $499,000 to deepfake Zoom call impersonating senior executives in Singapore
78-year-old Birmingham widow loses $11,000 to AI voice cloning scam
Related Legislation
Other policies covering the same harm domain