SB 5356 — Establishing Guidelines For Government Procurement And Use Of Automated Decision Systems In Order To Protect Consumers, Improve Transparency, And Create More Market Predictability
SB 5356 establishes guidelines for government procurement and use of automated decision systems. The bill sets transparency and accountability requirements for agencies that acquire or deploy such systems, with the stated goals of protecting consumers, improving transparency, and creating more market predictability. It aims to address harms related to algorithmic discrimination by automated decision systems used in government.
Linked Incidents
3 incidents this policy has been directly linked to
13-year-old student exposed to historically inaccurate AI-generated images via Google Gemini, leading to system-wide pause for retooling
Black man wrongfully arrested after facial recognition misidentification leading to $8M settlement
Adult patron wrongfully arrested after facial recognition misidentification at Peppermill Casino
Related Incidents
Same harm domain; actors and location may differ
Orlando man wrongfully arrested after facial recognition misidentification by Orlando police
50-year-old woman wrongfully arrested after facial recognition misidentification in Tennessee, leading to six months in custody without compensation
26-year-old South Asian software engineer wrongfully arrested after facial recognition misidentification in Milton Keynes theft case
Innocent South Asian man wrongfully arrested after AI facial recognition misidentification in Milton Keynes, leading to legal action and criticism of racial bias
Related Legislation
Other policies covering the same harm domain