Institute for Strategic Dialogue
Institute for Strategic Dialogue has been named in 2 documented digital harm incidents. The most common harm domain is Misinfo & Disinfo.
Documented Incidents
Chinese "Spamouflage" Influence Operation Uses Fake U.S. Voter Personas
Researchers at Graphika identified a Chinese state-linked influence campaign, dubbed "Spamouflage," that created a network of fake social media accounts impersonating U.S. voters, soldiers, and a news outlet. The operation posted divisive content on X, TikTok, YouTube, Instagram, and Facebook ahead of the 2024 presidential election, targeting topics such as reproductive rights, homelessness, Ukraine, and Israel. Meta linked the network to Chinese law enforcement, while TikTok removed one of the accounts for policy violations after a video mocking President Biden amassed 1.5 million views. The campaign illustrates China's use of deceptive online behavior to portray the United States as politically unstable.
AI-generated disinformation disrupts Bangladesh's 2024 general election campaign
A report by *The Daily Star*, cited in the *Financial Times*, highlights the use of AI-generated disinformation in Bangladesh ahead of its January 2024 elections. Pro-government outlets and influencers used AI tools such as HeyGen to create fake news clips and deepfake videos targeting both the ruling party and the opposition Bangladesh Nationalist Party (BNP). Examples include an AI-generated news anchor criticizing the U.S. and a deepfake video falsely showing an opposition leader downplaying support for Gazans. The disinformation spread on platforms such as X and Facebook, with Meta removing some content after being contacted by the *Financial Times*. Experts warn that the lack of regulation, together with the possibility that bad actors will falsely dismiss genuine content as AI-generated, could further erode public trust in information. The issue is part of a growing global concern about AI's role in elections, particularly in smaller markets that may be overlooked by major tech companies.