Platform Policy Deep Dive

Meta Ends Fact-Checking: The $100 Million Retreat

How Zuckerberg's January 2025 announcement dismantled a global verification network spanning 90+ organizations across 119 countries

TL;DR

On January 7, 2025, Meta CEO Mark Zuckerberg announced the end of third-party fact-checking on Facebook and Instagram, replacing it with X-style "Community Notes." The decision dismantles a $100 million global network of 90+ fact-checking organizations across 119 countries. While Zuckerberg frames this as restoring "free expression," the timing—three weeks before Trump's inauguration, following a $1 million donation to his inaugural fund—raises questions about political motivation. Research shows that when a Community Note is applied, reposts of the flagged post drop by 46%, but only 7.4% of election-related notes ever reach users.

Executive Summary

Meta's decision to end third-party fact-checking represents the most significant retreat from content moderation by a major platform since Elon Musk's acquisition of Twitter. The move eliminates professional oversight of misinformation on platforms serving 3 billion users, replacing it with a crowdsourced system that research shows is effective when notes appear—but structurally disadvantaged against viral, polarizing content. The political context cannot be ignored: Meta appointed Republican Joel Kaplan as global policy chief on January 2 and ended fact-checking on January 7; Trump took office on January 20. This analysis examines what the program accomplished, what replaces it, and what the research says about Community Notes' effectiveness.

January 2025: The Five-Day Pivot
Key events from Meta's policy shift, Jan 2-7, 2025

The Announcement

On January 7, 2025, Mark Zuckerberg released a video announcing sweeping changes to Meta's content moderation approach. "Fact checkers have just been too politically biased and have destroyed more trust than they've created," he declared, echoing language long used by Donald Trump and conservative critics of social media. [1]

The announcement came with several major policy shifts. Meta would replace its 10 U.S. fact-checking partners—including FactCheck.org, PolitiFact, Reuters, and the Associated Press—with a "Community Notes" system modeled on X's approach. The company would also relax restrictions on topics including immigration and gender identity, and relocate its trust and safety team from California to Texas. [2]

Perhaps most tellingly, Meta's Chief Global Affairs Officer Joel Kaplan—a former Bush administration deputy chief of staff appointed just five days earlier—stated the company's previous approach "has gone too far" and resulted in "too many mistakes, frustrating our users." [2]

What the Fact-Checking Program Actually Did

Meta's third-party fact-checking initiative launched in December 2016 following criticism of the platform's role in election misinformation. The program operated as a structured partnership with organizations certified by Poynter's International Fact-Checking Network. [4]

Critically, fact-checkers never had the power to remove content. As Meta itself emphasized: "Fact-checkers do not remove content, accounts or Pages from Facebook." When misinformation was rated false, Meta reduced its distribution and applied warning labels linking to fact-checkers' articles. The company also used AI to match similar claims across platforms. [4]

Neil Brown, president of the Poynter Institute, pushed back on Zuckerberg's characterization: "Facts are not censorship. Fact-checkers never censored anything. And Meta always held the cards." [4]

Global Fact-Checking Network
Meta's fact-checking program spanned 119 countries

The Global Dimension

What makes Meta's decision particularly consequential is its global scale. The company had allocated approximately $100 million to support certified fact-checking organizations—more than 90 organizations working in over 60 languages across 119 countries. [7]

While Meta stated the rollback is "starting in the U.S." and does not apply to other countries "at this time," global partners are scrambling. Several organizations were launched specifically to work with Meta, and as one fact-checker told Reuters Institute: "The end of the program will be the same [as] the end of fact checking in some regions." [9]

In countries like India, where the government wields far more propaganda firepower than civil society, critics warn the change will make "platforms easier to abuse for coordinated actors who want to spread disinformation for political gain." [8]

Some organizations report that Meta funding represents 20-30% of their budgets, though many declined to specify exact figures due to non-disclosure agreements. Angie Drobnic Holan, director of the International Fact-Checking Network, warned: "We'll see fewer fact-checking reports published and fewer fact checkers working." [7]

The Political Context

The timing of Meta's announcement cannot be separated from its political context. On January 2, 2025—five days before the fact-checking announcement—Meta appointed Joel Kaplan as its new Chief Global Affairs Officer, replacing Nick Clegg. [10]

Kaplan served as White House Deputy Chief of Staff under George W. Bush. Within Meta, he has been described as "a strong conservative voice" who "helped place conservatives in key positions" and "advocated for the interests of right-wing websites Breitbart News and The Daily Caller." [11]

The broader pattern includes: Meta donating $1 million to Trump's inaugural fund, Zuckerberg visiting Trump at Mar-a-Lago in December, and the addition of UFC CEO Dana White—a close Trump ally—to Meta's board. [1]

When asked about the changes, Trump said: "I think they've come a long way." Asked if Zuckerberg was "directly responding to the threats that you have made to him in the past," Trump responded: "Probably." [1]

Community Notes: When Applied
Engagement drops after notes appear (UW study, 2025)

Does Community Notes Work?

The research on Community Notes presents a nuanced picture. A September 2025 University of Washington study published in PNAS found that when Community Notes are applied, they significantly reduce engagement: 46% fewer reposts, 44% fewer likes, 22% fewer replies, and 14% fewer views. [6]

The same study found that posts with public correction notes were 32% more likely to be deleted by their authors, suggesting the system does create accountability pressure. Verified users (blue checkmarks) were "particularly quick to delete their posts" when they received Community Notes. [6]

However, the system has critical structural limitations. During the 2024 U.S. presidential election, researchers found that misleading posts without notes spread 13 times faster than those with Community Notes attached. The reason: the rating algorithm requires agreement across politically diverse users, making it harder for notes on polarizing content to achieve "helpful" status. [12]
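
To make that structural constraint concrete, here is a toy sketch of a bridging-style display rule: a note only goes public when raters from different viewpoint clusters independently find it helpful. This is not X's actual Community Notes scoring model (which scores notes via matrix factorization over the full rating history); the function, thresholds, and cluster labels below are illustrative assumptions only.

```python
# Toy sketch of a bridging-style display rule for crowdsourced notes.
# NOT X's actual Community Notes algorithm; thresholds and labels are
# invented here purely to illustrate the cross-viewpoint agreement requirement.

from collections import defaultdict

def note_is_displayed(ratings, min_helpful_share=0.6, min_raters_per_cluster=2):
    """ratings: list of (viewpoint_cluster, rated_helpful) pairs from individual raters."""
    by_cluster = defaultdict(list)
    for cluster, rated_helpful in ratings:
        by_cluster[cluster].append(rated_helpful)

    # Agreement from a single viewpoint cluster is never enough.
    if len(by_cluster) < 2:
        return False

    # Every participating cluster must supply enough raters, and a clear
    # majority within each cluster must rate the note helpful.
    for votes in by_cluster.values():
        if len(votes) < min_raters_per_cluster:
            return False
        if sum(votes) / len(votes) < min_helpful_share:
            return False
    return True

# A note correcting a polarizing claim: one cluster rejects it, so it never shows.
polarizing = [("left", True), ("left", True), ("right", False), ("right", False)]
# A note on a non-polarizing claim: both clusters agree, so it is displayed.
consensus  = [("left", True), ("left", True), ("right", True), ("right", True)]

print(note_is_displayed(polarizing))  # False
print(note_is_displayed(consensus))   # True
```

The point of the sketch: under any rule that conditions on cross-viewpoint agreement, the content that most divides raters is also the hardest to correct, which is consistent with the spread gap above and the low display rate reported below.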

A study of election-related Community Notes found that only 7.4% of proposed notes were ever displayed to users—meaning the vast majority of identified misinformation received no visible correction. [3]

Fact-Checking Partner Revenue Impact
Meta funding as % of organization budgets

What the Critics Say

FactCheck.org, one of the terminated partners, issued a direct response: "Our work isn't about censorship. We provide accurate information to help social media users." The organization emphasized that despite losing the partnership, "Fact-checking is public service journalism, and we're more convinced than ever that it's needed." [5]

FTC Chair Lina Khan expressed concern about potential "sweetheart deals" between Meta and the incoming administration, suggesting the policy changes may be motivated by regulatory considerations rather than genuine concern for free speech. [1]

Claire Wardle from Cornell University warned that without professional fact-checkers, "there will be an incentive for those who want to spread that kind of content." [3]

The Union of Concerned Scientists described the decision as "raising risks of disinformation to democracy," noting that "Meta's policy change eliminating fact-checking in the US represents a dangerous shift that could increase online hate, polarization, and the potential for real-world violence." [14]

The "Fewer Mistakes" Defense

Meta has defended its decision by pointing to error rates. In its official announcement, the company acknowledged that "one to two out of every 10" content removals in December 2024 "may have been mistakes." By May 2025, Meta reported a "roughly 50% reduction in enforcement mistakes" from Q4 2024 to Q1 2025 under the new system. [2]

However, critics note that this framing conflates two different issues. The 10-20% error rate Meta cites was for automated content removal—not fact-checking. Fact-checkers never removed content; they only applied labels. The comparison suggests either confusion or deliberate misdirection about what the fact-checking program actually did. [4]

Bottom Line

Meta's decision to end third-party fact-checking is not a neutral policy adjustment—it's a calculated retreat from accountability, timed to align with a political transition. The research is clear: Community Notes work when they appear, but they structurally fail on polarizing content that most needs correction. Meanwhile, 90+ fact-checking organizations that served as a global bulwark against misinformation now face an uncertain future. Zuckerberg frames this as "restoring free expression." Trump calls it Meta having "come a long way." The 3 billion users on Facebook and Instagram will discover what it actually means.