Meta Oversight Board advisory on community fact-checks and disinformation risks
Meta's Oversight Board issued an advisory warning that user-generated fact-checking via "community notes" could pose significant human rights risks and contribute to tangible harms, particularly in repressive regimes, in conflict zones, and during elections. The report highlights risks of disinformation amplification, restricted access to information, and potential manipulation by malicious actors using AI. It recommends that Meta test the system for disinformation risks and confirm that free media and civil society are present in a region before rolling it out globally.
Related Incidents
Incidents in the same harm domain; actors and locations may differ
South Florida man arrested after posting AI-generated deepfake video of deputy’s patrol car being broken into in Puerto Rico
YouTube AI auto-dub mistranslates 'Now, Jimmy Kimmel!' into 'Well now, kill him' in Japanese-language version of his own show
Retired Toronto banker Michael Mallinson falsely identified as Charlie Kirk's assassin in viral social media posts
Journalist Helen Brown's photo stolen and placed on Kremlin-linked fake news site to lend credibility to Ukraine disinformation
Pet care company targeted by AI-generated fake review bombing campaign on Google
Related Legislation
Other policies covering the same harm domain