Active Threat Analysis

Spamouflage Network
Influence Operation Analysis

Real-time visualization of China's largest known influence operation targeting Western democracies. Data compiled from Meta, Graphika, and U.S. Department of Justice investigations.

7,700+ Accounts Identified
50+ Platforms Targeted
Active Since 2017

Executive Summary

Spamouflage (also known as Dragonbridge or Storm-1376) is a Chinese state-linked influence operation that has been active since 2017, making it one of the longest-running coordinated disinformation campaigns targeting Western democracies.[1] In August 2023, Meta identified it as "the largest known cross-platform covert influence operation in the world," removing over 7,700 Facebook accounts, 954 Pages, 15 Groups, and 15 Instagram accounts connected to the network.[2] The operation spans more than 50 platforms, including YouTube, TikTok, X (Twitter), and Reddit, and employs a workforce that operates on clear shift patterns, with activity concentrated in mid-morning and early afternoon Beijing time.[2]
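
The shift-pattern signal described above can be checked directly against posting timestamps. The sketch below is illustrative, not Meta's methodology: it assumes timezone-aware UTC datetimes for an account's posts and a nominal 09:00–17:00 Beijing workday, then measures what share of activity falls inside that window.

```python
from datetime import datetime, timezone, timedelta

BEIJING = timezone(timedelta(hours=8))  # China Standard Time, no DST

def working_hours_share(post_times_utc, start_hour=9, end_hour=17):
    """Fraction of posts made between start_hour and end_hour Beijing time.

    post_times_utc: iterable of timezone-aware datetimes in UTC.
    A share close to 1.0 across many accounts is consistent with the
    office-hours shift pattern described in Meta's report; on its own
    it is a weak signal and needs corroborating indicators.
    """
    local_hours = [t.astimezone(BEIJING).hour for t in post_times_utc]
    if not local_hours:
        return 0.0
    in_window = sum(start_hour <= h < end_hour for h in local_hours)
    return in_window / len(local_hours)

# Hypothetical example: three posts, two inside the Beijing workday.
posts = [
    datetime(2023, 8, 1, 2, 30, tzinfo=timezone.utc),  # 10:30 Beijing
    datetime(2023, 8, 1, 5, 0, tzinfo=timezone.utc),   # 13:00 Beijing
    datetime(2023, 8, 1, 16, 0, tzinfo=timezone.utc),  # 00:00 Beijing, next day
]
print(f"working-hours share: {working_hours_share(posts):.2f}")  # 0.67
```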

Meta's investigation revealed that operators "tried to conceal their identities and coordination" but ultimately traced the network to "individuals associated with Chinese law enforcement."[2] The U.S. Department of Justice has linked Spamouflage to the Ministry of Public Security's "912 Special Project Working Group," resulting in charges against 34 Chinese national police officers in April 2023 for transnational repression schemes.[3] Internal evidence suggests the campaign operates under production quotas rather than strategic impact goals, with operators working from geographically dispersed locations while receiving centralized content direction and sharing proxy infrastructure.[2]

Ahead of the 2024 U.S. presidential election, Graphika researchers documented an evolution in Spamouflage tactics: the operation began deploying fake accounts impersonating American voters, soldiers, and news outlets across social media platforms.[4] These accounts posted divisive content about abortion rights, homelessness, Ukraine, and Israel, "attempting to build their fake identities less around an individual party and more around the idea of U.S. patriotism or national pride," according to Graphika's chief intelligence officer.[4] While most accounts achieved minimal engagement, one TikTok video mocking President Biden reached 1.5 million views before removal, demonstrating the operation's occasional ability to achieve viral reach through sheer volume.[4]

Despite its unprecedented scale, Spamouflage has been notably ineffective at generating authentic engagement. Ben Nimmo, Meta's global threat intelligence lead, observed: "There's nothing to suggest that anywhere across the internet this operation has really been systematically landing any kind of audience yet."[1] Analysis indicates the campaign relies on quantity over quality, with posts featuring linguistic errors, copied-and-pasted comments creating "hundreds of identical posts," and follower counts inflated by purchasing spam pages from operators in Vietnam and Bangladesh.[1] Nevertheless, the persistence and resources dedicated to the operation, spanning years and thousands of accounts, reflect a long-term strategic commitment by Chinese authorities to shape Western perceptions and sow internal divisions.[2][4]
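
The "hundreds of identical posts" behavior is one of the simpler signals to surface programmatically. The sketch below is illustrative only: it assumes input as (account_id, text) pairs, normalizes each comment, and groups accounts that published the same text. Production pipelines would typically add near-duplicate matching (shingling, MinHash) rather than exact hashes.

```python
import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase, strip URLs and extra whitespace so trivial edits still collide."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def copy_paste_clusters(posts, min_accounts=3):
    """Group accounts that posted identical (normalized) text.

    posts: iterable of (account_id, text) pairs -- hypothetical input format.
    Returns clusters where at least `min_accounts` distinct accounts posted
    the same content, a coarse proxy for the copy-paste behavior documented
    in the Spamouflage takedowns.
    """
    by_hash = defaultdict(set)
    sample_text = {}
    for account_id, text in posts:
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        by_hash[digest].add(account_id)
        sample_text.setdefault(digest, text)
    return [
        {"accounts": sorted(accounts), "text": sample_text[digest]}
        for digest, accounts in by_hash.items()
        if len(accounts) >= min_accounts
    ]

# Hypothetical usage: three accounts posting the same line with cosmetic changes.
posts = [
    ("acct_01", "Democracy is failing, see https://example.com/a"),
    ("acct_02", "democracy is failing,  see https://example.com/b"),
    ("acct_03", "Democracy is failing, see"),
    ("acct_04", "Unrelated comment"),
]
print(copy_paste_clusters(posts, min_accounts=3))
```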

Coordinated Inauthentic Behavior Cluster

Node Types
  • Coordinator Hub (High Centrality)
  • Amplifier Account
  • Automated Bot Account
Platform Distribution
Accounts by social network
  • Facebook/Instagram 4,789
  • X (Twitter) 2,048
  • YouTube 981
  • TikTok 634
  • Other Platforms 252
Narrative Targets
Primary disinformation themes
Detection Indicators
Behavioral signatures
  • Beijing Working Hours 94%
  • Copy-Paste Content 87%
  • AI-Generated Avatars 72%
  • Coordinated Posting 68%
  • Burst Activity Pattern 61%
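
One way to read the indicators above is as features in a per-account score. The weights and the review threshold in the sketch below are assumptions for illustration, not the scoring used by Meta, Graphika, or any platform; the point is only that several weak signals are combined before an account is flagged for human review.

```python
# Illustrative weights only; not the methodology of any platform or researcher.
INDICATOR_WEIGHTS = {
    "beijing_working_hours": 0.30,
    "copy_paste_content":    0.25,
    "ai_generated_avatar":   0.20,
    "coordinated_posting":   0.15,
    "burst_activity":        0.10,
}

def coordination_score(indicators: dict) -> float:
    """Weighted sum of indicator strengths for a single account.

    indicators maps indicator name -> strength in [0, 1]; missing
    indicators count as 0. Returns a score in [0, 1].
    """
    return sum(
        weight * min(max(indicators.get(name, 0.0), 0.0), 1.0)
        for name, weight in INDICATOR_WEIGHTS.items()
    )

# Hypothetical account whose behavior matches most of the signatures above.
account = {
    "beijing_working_hours": 0.94,
    "copy_paste_content": 0.87,
    "ai_generated_avatar": 1.0,
    "coordinated_posting": 0.5,
    "burst_activity": 0.0,
}
score = coordination_score(account)
print(f"coordination score: {score:.2f}",
      "-> queue for review" if score >= 0.6 else "-> ignore")
```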

Operation Timeline

2017
Initial Detection
First instances of coordinated inauthentic behavior detected by researchers. Early operations focused on promoting positive narratives about China.
August 2019
Hong Kong Protests Campaign
Twitter, Facebook, and YouTube remove thousands of accounts and channels spreading disinformation about Hong Kong pro-democracy protesters.
September 2020
U.S. Election Interference Begins
Graphika identifies shift toward targeting U.S. domestic politics. Content amplifies division on racial justice, COVID-19, and political polarization.
August 2023
Meta's Largest Takedown
Meta removes over 7,700 Facebook accounts, 954 Pages, 15 Groups, and 15 Instagram accounts linked to Spamouflage - the largest single network takedown in the company's history.
September 2024
DOJ Seizure & Indictments
December 2024
Ongoing Operations
Despite takedowns, researchers continue to identify new clusters of accounts displaying Spamouflage-consistent behavior patterns across platforms.

Sources