Institute for Strategic Dialogue
NGO · United Kingdom · Est. 2006

Institute for Strategic Dialogue has been named in 2 documented digital harm incidents. The most common harm domain is Misinfo & Disinfo.

Incidents: 2
Fatalities: 0
Minors involved: 0
Financial harm

Documented Incidents (2)
Sep 3, 2024·United States

Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas

Researchers at Graphika identified a Chinese state‑linked influence campaign, dubbed “Spamouflage,” that created a network of fake social‑media accounts impersonating U.S. voters, soldiers, and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram, and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine, and Israel. Meta linked the network to Chinese law enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China’s use of deceptive online behavior to portray the United States as politically unstable.

Misinfo & Disinfo
Jan 1, 2024·Bangladesh

AI-generated disinformation disrupts Bangladesh's 2024 general election campaign

A report by *The Daily Star*, cited in the *Financial Times*, highlights the use of AI-generated disinformation in Bangladesh ahead of its January 2024 elections. Pro-government outlets and influencers used AI tools such as HeyGen to create fake news clips and deepfake videos targeting both the ruling party and the opposition Bangladesh Nationalist Party (BNP). Examples include an AI-generated news anchor criticizing the U.S. and a deepfake video falsely showing an opposition leader downplaying support for Gazans. The disinformation spread on platforms such as X and Facebook, with Meta removing some content after being contacted by the *Financial Times*. Experts warn that the lack of regulation, and the potential for bad actors to falsely claim genuine content is AI-generated, could further erode public trust in information. The issue is part of a growing global concern about AI's role in elections, particularly in smaller markets that may be overlooked by major tech companies.

Misinfo & Disinfo · Disinformation

Linked Legislation (4)
Protect Elections from Deceptive AI Act — 119th Congress (S.1213 / HR 5272) · United States
HB 4191 — Relating To Requirements Imposed On Social Media Companies To Prevent Corruption And Provide Transparency Of Election-Related Content Made Available On Social Media Websites · West Virginia
SB 816 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications · Rhode Island
HB 5872 — An Act Relating To Elections -- Deceptive And Fraudulent Synthetic Media In Election Communications · Rhode Island

By Harm Domain

Misinfo & Disinfo: 2