Investigation
Three companies control 88% of enterprise AI — more concentrated than legacy media ever was. OpenAI's lobbying increased 1,050% since 2023, hitting $2.99M in 2025. Anthropic followed at $3.13M (+1,018%). Sam Altman donated $1M to Trump's inauguration, then joined the White House Stargate announcement ($500B AI infrastructure). Greg Brockman gave $25M to MAGA Inc. super PAC. Anthropic countered with $20M to pro-regulation PAC. Deepfakes surged from 500K (2023) to 8M files (2025), flooding elections in Ireland, Moldova, India, and the 2026 U.S. midterms. China uses AI to generate state propaganda at scale. The U.S. has no federal AI law — Trump's December EO seeks to preempt state regulations. The Musk-Altman power struggle intensified with a $97.4B unsolicited bid. This is regulatory capture by design.
The concentration of artificial intelligence development among a handful of corporations, backed by the world's wealthiest individuals and deepening ties to political power, mirrors — and in some ways surpasses — the media monopolies of the 20th century. This investigation documents how AI companies are spending record sums on lobbying, forging unprecedented relationships with the Trump administration, and competing with each other for political influence while governments worldwide struggle to regulate a technology that could reshape information, elections, and democratic discourse itself. Three companies control 88% of enterprise AI. OpenAI and Anthropic alone captured 14% of all global venture capital. The "Super Six" tech giants will spend nearly $700 billion on AI infrastructure in 2026. Deepfake volume has grown sixteenfold in two years, with human detection rates at just 24.5%. This is the story of how AI became the new media monopoly — and what happens when the most powerful information technology in history falls under the control of a few billionaires and the governments courting them.
1. The New Media Monopoly: Concentration Beyond Precedent
In 1983, media critic Ben Bagdikian published The Media Monopoly, documenting that 50 corporations controlled half of all U.S. media. By 2011, that number had collapsed to just six companies controlling 90% of all American media [22].
Today, artificial intelligence has surpassed even that level of concentration. As of 2025, three companies — Anthropic (40%), OpenAI (27%), and Google (21%) — control 88% of enterprise LLM API usage [1].
In the consumer market, ChatGPT holds 68% of AI chatbot usage, down from 87.2% one year ago, with Google Gemini surging to 18.2% as the only meaningful competitor. Combined, these two platforms account for 86.2% of consumer AI interactions [1].
The financial scale is staggering. OpenAI and Anthropic alone captured 14% of all global venture capital across every sector in 2025, and total AI investment of $202.3 billion represented 50% of all VC deployed worldwide. OpenAI reached a $500 billion valuation in October 2025. Anthropic's revenue grew from $87 million annualized in January 2024 to $7 billion by October 2025 [11].
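These percentages imply concrete dollar figures. A quick cross-check (the roughly $405 billion and $57 billion values are derived here, not stated in the article):

```python
# Cross-checking the venture-capital figures quoted above.
ai_investment = 202.3     # $B, total AI investment in 2025
ai_share_of_vc = 0.50     # AI's stated share of all VC deployed worldwide

total_vc = ai_investment / ai_share_of_vc   # implied global VC pool, $B
openai_anthropic = total_vc * 0.14          # the two labs' 14% share, $B

print(round(total_vc, 1))          # 404.6
print(round(openai_anthropic, 1))  # 56.6
```

The derived ~$57 billion is roughly consistent with the funding rounds reported later in the piece (OpenAI's $40 billion raise plus Anthropic's $13 billion Series F).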
The "Super Six" tech giants (Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta) are expected to spend nearly $700 billion combined in 2026 on AI infrastructure — a sum larger than the annual GDP of all but roughly twenty countries [1].
William Randolph Hearst used his newspaper monopoly to influence policy and seek political office in the early 20th century. Today, Sam Altman uses OpenAI's dominance to advise the White House on AI policy while lobbying against regulation. The difference: Hearst controlled the printing presses. Altman controls the machines that generate the content itself.
2. The Lobbying Explosion: Record-Breaking Political Spending
AI companies have dramatically escalated their lobbying operations over the past three years, with expenditures increasing at rates far exceeding any other industry sector.
| Company | 2023 | 2024 | 2025 | Change (2023→2025) |
|---|---|---|---|---|
| OpenAI | $260K | $1.76M | $2.99M | +1,050% |
| Anthropic | $280K | $720K | $3.13M | +1,018% |
| Nvidia | ~$500K | ~$1.2M | ~$5M | +900% |
| Meta | $24M | $24.3M | $26.29M | +9.5% |
| Cohere | $70K | $230K | N/A | +229% (2023→2024) |
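The percent-change column can be reproduced with simple arithmetic; a minimal sketch using the table's own figures:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new, to one decimal place."""
    return round((new - old) / old * 100, 1)

# 2023 -> 2025 lobbying spend, in dollars, from the table above
print(pct_change(260_000, 2_990_000))      # OpenAI:    1050.0
print(pct_change(280_000, 3_130_000))      # Anthropic: 1017.9 (~+1,018%)
print(pct_change(500_000, 5_000_000))      # Nvidia:    900.0 (approximate filings)
print(pct_change(24_000_000, 26_290_000))  # Meta:      9.5
```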
The number of companies lobbying on AI issues jumped from 458 in 2023 to 648 in 2024 — a 41% increase in a single year [4].
Nvidia's lobbying acceleration has been particularly dramatic. In Q3 2025 alone, the chip giant spent $1.9 million — tripling its Q2 expenditure of $620,000. This surge coincides with escalating battles over AI chip export controls to China, with Nvidia deploying former congressional technology policy staffers to lobby against restrictions [23].
"What we're witnessing is a systematic capture of the regulatory process before meaningful regulation even exists," explained a technology policy analyst at OpenSecrets. "These companies are spending unprecedented sums to ensure the rules are written by the people they hire." [4]
The lobbying extends beyond federal efforts. AI companies are hiring former congressional staffers, state legislators, and regulatory officials to shape AI policy at every level of government. The revolving door between AI companies and government has become a well-oiled machine [9].
3. Political Donations: Buying Access to Power
Beyond lobbying, AI executives and companies have poured unprecedented sums into direct political contributions, super PACs, and inaugural funds — creating a web of financial influence that spans both major parties and all branches of government.
Trump Inauguration Donations (January 2025)
When Donald Trump was inaugurated for his second term in January 2025, tech companies and AI executives donated a record-shattering $245 million to his inaugural fund — more than double the roughly $107 million record set at his 2017 inauguration [6].
| Donor | Amount | Notes |
|---|---|---|
| Sam Altman (OpenAI CEO) | $1 million | Personal donation |
| Meta | $1 million | Corporate donation |
| Amazon | $2 million | $1M cash + $1M in-kind (Prime streaming) |
| Microsoft | $1 million | 2x its Biden inauguration donation |
The very next day, Altman stood in the White House Roosevelt Room alongside Oracle's Larry Ellison and SoftBank's Masayoshi Son to announce the Stargate project — a $500 billion AI infrastructure initiative [10].
Super PAC Arms Race
The battle for political influence extends far beyond inauguration donations. AI companies and their executives have established dueling super PACs designed to shape the 2026 midterm elections and beyond.
| Donor | PAC | Amount | Agenda |
|---|---|---|---|
| Greg Brockman (OpenAI co-founder) | MAGA Inc. | $25M | Pro-Trump super PAC |
| a16z (Andreessen Horowitz) | Leading the Future | $25M | Pro-AI industry candidates |
| Leading the Future (Total H2 2025) | — | $125M | 2026 midterms (light regulation) |
| Anthropic | Public First Action | $20M | Pro-regulation candidates (both parties) |
Anthropic's $20 million donation to Public First Action in February 2026 was explicitly framed as a counter to OpenAI's political spending. The company stated it was supporting "30-50 candidates from both parties who back AI guardrails" to prevent OpenAI from "amassing too much political power" [7].
The Leading the Future PAC, backed by prominent OpenAI figures and venture capitalists, raised $125 million in the second half of 2025 and plans to support candidates who favor lighter AI regulation in the 2026 midterms [8].
Anthropic's "pro-regulation" stance and OpenAI's "pro-innovation" positioning create the illusion of a principled debate. In reality, both companies oppose the kind of strict, enforceable safety testing and liability frameworks that independent experts recommend. This is not a fight between regulation and innovation — it's a fight over which company gets to write the rules it will follow.
4. The Revolving Door: Tech Executives in Government
The Trump administration's 2025 U.S. Tech Force initiative represents the formalization of the revolving door between AI companies and the federal government. Spearheaded by former venture capitalist Scott Kupor (Office of Personnel Management), the program aims to recruit 1,000 engineers and AI specialists for federal infrastructure [9].
Partner companies include OpenAI, Palantir, xAI, and Nvidia — the same corporations developing the AI systems that will be deployed in government operations and shaping the policies that regulate them.
"This is regulatory capture by design," said a researcher at the RAND Corporation. "The companies building AI are now sending personnel to implement federal AI policies. There is no separation between the regulated and the regulator." [9]
Sam Altman has taken a position as a senior advisor to the White House on AI issues, replacing Elon Musk's prior informal advisory role [10].
AI companies are also hiring lobbying firms staffed with former congressional technology policy staffers to influence AI legislation at the committee level. Former government officials have moved to AI companies, and vice versa, ensuring industry voices dominate policy discussions [9].
OpenAI's strategy mirrors the financial sector's "too big to fail" playbook — embedding itself so deeply in government infrastructure (Stargate, White House advisory roles, federal AI initiatives) that meaningful regulation becomes politically impossible. When the people writing AI policy work for AI companies, the rules will protect the companies, not the public.
5. The Stargate Project: Government-Corporate AI Fusion
On January 21, 2025 — the day after Trump's inauguration — the President stood in the White House Roosevelt Room to announce what he called "the largest AI infrastructure project in history": the Stargate project, a $500 billion joint venture between OpenAI, SoftBank, Oracle, and MGX (an Abu Dhabi sovereign wealth fund) [10].
Stargate by the Numbers
- $100 billion: Initial commitment in 2025
- $500 billion: Total investment through 2029
- 7 gigawatts: Planned data center capacity across 5+ sites
- Operational responsibility: OpenAI
- Financial responsibility: SoftBank
The project represents an unprecedented fusion of government endorsement and private capital. By announcing it from the White House the day after his inauguration, Trump effectively branded the federal government as a partner in OpenAI's expansion.
The scale is staggering. Seven gigawatts of data center capacity would consume more electricity than the entire nation of Ireland. The $500 billion total investment exceeds the annual GDP of all but about 25 countries [10].
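The Ireland comparison checks out on the back of an envelope, assuming continuous full-load operation (Ireland's roughly 30 TWh of annual electricity consumption is an outside estimate, not a figure from this article):

```python
# Back-of-the-envelope check on the Stargate electricity comparison.
capacity_gw = 7
hours_per_year = 24 * 365                          # 8,760 hours
annual_twh = capacity_gw * hours_per_year / 1_000  # GWh -> TWh

print(round(annual_twh, 1))  # 61.3 TWh per year at full load
print(annual_twh > 30)       # True: well above Ireland's ~30 TWh/yr (assumed)
```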
Critics have noted the timing: Sam Altman donated $1 million to Trump's inauguration, then appeared at the White House announcement less than 24 hours after the inauguration ceremony. The message was clear: access to the President is for sale, and AI companies have the cash to buy it.
Who Funds OpenAI?
As of October 2025, OpenAI operates as a Public Benefit Corporation with the following ownership structure [11]:
- Microsoft: 27% stake
- OpenAI Foundation: 26% stake
- Employees and other investors: 47%
OpenAI raised $40 billion in April 2025 at a $300 billion valuation (later reaching $500 billion by October). Anthropic raised $13 billion in Series F funding at a $183 billion valuation in September 2025, backed by Amazon ($8B), Google ($2B), Microsoft, Nvidia, and others [11].
The concentration of capital in AI has reached levels never before seen in technology investment, with two companies alone absorbing 14% of all venture capital deployed worldwide in 2025 [11].
6. Deepfakes and Synthetic Content as Political Weapons
While AI companies lobby for lighter regulation and forge partnerships with the White House, the tools they have unleashed are already being weaponized to manipulate elections, spread disinformation, and undermine democratic institutions worldwide.
The Scale of the Deepfake Threat
Deepfake files surged from 500,000 in 2023 to 8 million in 2025 — a sixteenfold increase in two years. Deepfake attacks now occur at a rate of one every five minutes. Human detection rates for high-quality deepfake video have collapsed to just 24.5% [17].
Europol estimates that 90% of online content may be synthetically generated by 2026. AI fraud losses are projected to climb from $12.3 billion in 2023 to $40 billion by 2027. Fraud attempts using deepfakes have increased 2,137% over three years [17].
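As a sanity check, the compound annual growth implied by the 2023 and 2025 file counts can be computed directly. A sixteenfold rise over two years works out to roughly 300% per year; headline "annual growth" figures quoted elsewhere may rest on different baselines or metrics:

```python
# Implied growth of deepfake file counts, 2023 -> 2025 (figures from above).
files_2023 = 500_000
files_2025 = 8_000_000
years = 2

growth_factor = files_2025 / files_2023         # overall multiple
annual_rate = growth_factor ** (1 / years) - 1  # compound annual growth

print(growth_factor)             # 16.0 (sixteenfold)
print(round(annual_rate * 100))  # 300 (% per year, i.e. 4x annually)
```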
2025-2026 Election Interference
AI-generated disinformation has already influenced elections across the globe [18][19]:
- Ireland: A library of 120+ deepfake images of Irish politicians was uploaded ahead of the October presidential election.
- Moldova: Russian-funded network used ChatGPT to generate pro-Kremlin propaganda, paying people to post it on social media.
- India/Indonesia/Mexico: AI used to create defamatory images of female candidates, amplifying misogynistic stereotypes.
- United States: A Virginia candidate debated an AI-generated version of his opponent. Experts warn of a flood of AI deepfakes in the 2026 midterms.
Twenty-six U.S. states have enacted laws regulating political deepfakes (banning or requiring disclosure). However, the Federal Election Commission remains divided along partisan lines and has failed to establish clear AI guidelines for campaign advertising [18].
"The objective of deepfakes is not only deception, but the gradual erosion of public trust in all political information," according to election security experts at the Brennan Center for Justice. When voters cannot trust what they see or hear, democracy itself becomes impossible [19].
7. Regulatory Divergence: U.S. vs. EU vs. China
Governments worldwide are taking radically different approaches to AI regulation — creating a fragmented global landscape where companies can shop for the most permissive jurisdiction and where democratic safeguards exist only in certain regions.
| Dimension | United States | European Union | China |
|---|---|---|---|
| Approach | Market-driven, deregulatory | Risk-based, comprehensive law | State-centric control |
| Primary Goal | Innovation/dominance | Rights/safety | National security/social stability |
| Regulation Status | Fragmented state laws; federal preemption push | EU AI Act in force | Strict alignment mandates |
| Content Control | Corporate discretion | Transparency requirements | State propaganda integration |
United States: Deregulation and Federal Preemption
In January 2025, Trump revoked Biden's AI safety executive order (EO 14110), calling it an impediment to innovation. In May 2025, Sam Altman testified to the Senate, calling safety testing requirements "disastrous" for the industry [12].
In December 2025, Trump signed an executive order establishing a "National Policy Framework for AI" that seeks to preempt state AI laws — directing an "AI Litigation Task Force" to challenge state regulations. Thirty-eight states adopted roughly 100 AI-related measures in 2025, but the federal preemption push threatens to override them [12].
Congress has not passed a comprehensive federal AI law.
European Union: Comprehensive Regulation
The EU AI Act is the world's only comprehensive AI law. It uses a risk-based classification system (prohibited, high-risk, limited, minimal). General-purpose AI model rules took effect in August 2025, requiring risk mitigation, transparency, and copyright compliance.
Despite the EU Act's existence, researchers noted that AI remains "less regulated than sandwiches" — a reference to the gap between theoretical frameworks and enforcement capacity [16].
China: State Control Integration
All major Chinese AI chatbots strictly align with Beijing's official narratives — generating verbatim government rhetoric on sensitive topics like Tiananmen Square, Uyghur persecution, and Taiwan [13].
The GoLaxy system, developed by Chinese state-linked entities, is an AI platform that ingests social media data, maps political relationships, and generates coordinated propaganda campaigns. Between December 2024 and March 2025, China-linked operations created 11 fake AI-generated news websites in English, French, Spanish, and Vietnamese [14].
In March 2025, China issued new regulations requiring all AI-generated content to be clearly labeled (effective September 2025). AI enables real-time censorship through automated moderation, sentiment analysis, and recommendation algorithms that downrank criticism [13].
8. The Musk-Altman Rivalry and Power Consolidation
In February 2025, Elon Musk led a consortium that submitted a $97.4 billion unsolicited bid to buy the nonprofit controlling OpenAI. Sam Altman rejected the offer and counter-offered to buy X (formerly Twitter) for $9.74 billion [20].
The bid underscored the power struggle between two of the world's wealthiest and most influential tech figures — both of whom co-founded OpenAI in 2015 before Musk departed in 2018 citing disagreements over direction and control.
xAI: Musk's Counter-Monopoly
Musk launched xAI in 2023 as an OpenAI rival. In February 2026, SpaceX acquired xAI, consolidating Musk's AI ambitions under his aerospace company. The stated goal: to build more "transparent" AI. In practice, xAI's Grok chatbot has demonstrated measurable political bias — producing a rightward shift on Political Compass testing and generating antisemitic outputs praising Hitler [15].
Musk has also sued OpenAI for antitrust violations and to block its for-profit conversion, arguing that the company has strayed from its nonprofit mission [21].
Experts warn that Musk's "control-driven approach could centralize AI power in his hands" — creating a different monopoly rather than preventing one. The Musk-Altman rivalry is not about AI safety versus innovation. It's about which billionaire controls the most powerful technology in human history [21].
The public framing presents Musk as the transparency advocate and Altman as the corporate consolidator. In reality, both are competing for monopoly control. Musk's xAI has weak safety guardrails and ideological bias. Altman's OpenAI lobbied against state safety laws. Neither represents the public interest — both represent the consolidation of AI power in the hands of unelected billionaires.
9. Why This Matters Now
The consolidation of AI power is not a theoretical concern. It is happening right now, at a speed and scale that makes the media monopolies of the 20th century look glacial by comparison.
Three companies control 88% of enterprise AI. They are spending record sums to influence government. They are sending executives to write federal AI policy. They are donating millions to political campaigns and super PACs. They are building the infrastructure to generate 90% of online content. And they are doing all of this while governments struggle to regulate a technology that could reshape information, elections, and democracy itself.
The 2026 midterms will be the first major test of whether democratic institutions can withstand AI-driven disinformation at scale. Early signs are not encouraging. Deepfakes are flooding social media. The Federal Election Commission is paralyzed. Twenty-six states have enacted laws, but the Trump administration is pushing to preempt them with a federal framework designed to protect industry, not voters [18].
"AI is less regulated than sandwiches," noted the Future of Life Institute in its 2025 AI Safety Index [16].
Anthropic CEO Dario Amodei said he is "deeply uncomfortable" with companies regulating themselves on AI safety. Yet even Anthropic — which positions itself as the safety-conscious alternative — lobbied against California's SB1047 safety bill, arguing that "the nascency of the science of AI evaluations is a reason not to legislate too prescriptively, too early" [24].
The message is clear: AI companies want to regulate themselves, and they are spending billions to ensure no one else can.
10. What Can Be Done
The concentration of AI power is not inevitable. It is the result of policy choices — choices that can be reversed. Here are actionable steps for policymakers, journalists, and citizens:
For Policymakers
- Pass comprehensive federal AI safety legislation with enforceable testing requirements, liability frameworks, and transparency mandates.
- Resist federal preemption of state AI laws. States are laboratories of democracy — allow them to experiment with AI regulation.
- Close the revolving door. Implement strict cooling-off periods for government officials moving to AI companies and vice versa.
- Require political deepfake disclosure. Mandate clear labeling of AI-generated content in campaign advertising.
- Fund independent AI safety research. Do not rely on industry-funded studies to set policy.
For Journalists
- Follow the money. Report on lobbying, campaign donations, and revolving door hires as aggressively as you would for any other industry.
- Challenge the "innovation vs. safety" framing. This is a false dichotomy designed to justify deregulation.
- Investigate AI-generated content. Use detection tools, verify sources, and educate the public on synthetic media risks.
- Spotlight conflicts of interest. When AI executives advise the White House, report it as the revolving door capture it is.
For Citizens
- Demand transparency. Ask AI companies to disclose training data, safety testing results, and political spending.
- Support state AI regulations. Contact your state legislators to resist federal preemption efforts.
- Verify before sharing. Treat AI-generated content with skepticism. Use reverse image search and fact-checking tools.
- Diversify your information sources. Do not rely on a single AI platform for news or information.
- Advocate for antitrust enforcement. Three companies controlling 88% of AI is a monopoly problem, not an innovation success story.
The next 12 months will determine whether AI remains under democratic control or becomes a tool of concentrated corporate and political power. The lobbying has already happened. The donations have been made. The revolving door is spinning. The only question is whether the public will demand accountability before it's too late.