Eagle Intelligence Reports

America’s AI Governance Gap

Eagle Intelligence Reports • January 29, 2026

Federal inaction on AI regulation has produced a patchwork of state AI laws, undermining U.S. competitiveness and threatening the country’s leadership of a potentially civilization-defining technology. Since 2024, more than 145 AI-related laws have been enacted at the state level, with Colorado, California, Texas, and Utah adopting divergent regulations.

State approaches to regulation vary significantly. Colorado implements risk-based consumer protections for individuals affected by AI. California targets specific sectors, regulating transparency and generative AI. Texas emphasizes industry self-regulation while requiring compliance with existing statutes, including its biometric privacy law. Utah mandates only limited consumer disclosures tied to high-risk and sensitive-data contexts. Penalties and compliance requirements differ across states, creating distinct regulatory obligations for companies operating in each jurisdiction.

This fragmented system advantages large tech firms with the resources to navigate multiple regulatory regimes while hurting smaller competitors. It also weakens U.S. influence in setting global AI standards. In the absence of a federal U.S. approach, the EU and China have advanced clear national frameworks that fill the gap left by U.S. inaction, limiting Washington’s role in global AI policy despite its leadership in the technology.

In the absence of a federal U.S. approach, the EU and China have advanced clear national frameworks that fill the gap left by U.S. inaction

America Risks Becoming Rule-Taker, Not Rule-Maker

The AI industry remains divided on how the technology should be regulated. The U.S.-based AI firm Anthropic backs thoughtful regulation, while Trump’s AI and cryptocurrency czar, David Sacks, opposes state-level rules, calling them a form of regulatory capture. With federal inaction, the U.S. risks a widening regulatory minefield and the entrenchment of weak global standards—potentially shifting America from rule-maker to rule-taker in shaping the future of AI. This fragmentation raises questions about how effectively federal agencies can deploy AI across a wide range of vital functions, from national defense to weather forecasting and disease tracking.


Although state legislation addresses real AI concerns, Congress’s paralysis represents a regulatory failure compared to the EU’s risk-based framework and China’s ability to impose uniform regulations almost overnight. The United States remains stuck in partisan gridlock amid lobbying pressure and uncertainty about how to regulate a rapidly changing industry without stifling innovation.

States Capitalize on Federal Vacuum

American states have long exploited gaps in federal regulations to attract firms, using tools such as tax breaks, permissive oversight, and business-friendly laws not covered by federal statutes. South Dakota, for example, drew banks by offering less stringent financial regulations, while Delaware leads in incorporation thanks to its specialized courts.

AI companies, however, face hurdles that differ sharply from those constraining banks. A national bank can apply South Dakota rates to any of its branches, while AI companies must abide by regulations in each state of operation. Operating in California requires compliance with one set of rules, while Colorado imposes another. Divergent state AI regulations burden firms with costly multistate compliance.

Divergent state AI regulations burden firms with costly multistate compliance

Regulatory Chaos Impacts Government Operations

The fragmentation extends beyond corporate compliance headaches and into core government functions. Federal agencies operate nationally but increasingly rely on AI systems developed by private-sector partners subject to the same regulatory quagmire. If the FBI, the DEA, and the ATF want to share intelligence on common AI analysis platforms, they could easily run into problems when contractors tailor systems to comply with differing state requirements.

Faced with the same patchwork of state data protection laws, contractors often default to the most restrictive standard, which is not always aligned with federal operational needs. A Department of Homeland Security contractor processing biometric data, for example, faces different legal obligations depending on whether the data was collected in California, Texas, or Illinois—even though federal security needs remain constant across state lines.

Compliance complexity not only raises costs but can also impair federal operations. When state laws impose conflicting requirements on how AI systems must handle sensitive data, federal agencies face a difficult choice: either fragment their data systems to comply with multiple jurisdictions and accept reduced AI capabilities, or risk legal challenges from state attorneys general. This Rube Goldberg approach inhibits interagency cooperation, since information sharing increasingly depends on AI analyses constrained by the most restrictive state requirements. In practice, this gives the states with the most rigid regulations the authority to set policy nationwide. Ironically, federal law, which is supposed to preempt state law on matters of interstate commerce and national security, is instead constrained by a Balkanized regulatory system.

States Enact 145 AI Laws

The regulatory landscape could soon become more complex. Over 1,000 AI-related bills were introduced nationwide in 2024–2025, with around 145 passing, most of them targeting specific sectors or creating study panels. Only four states—Colorado, California, Utah, and Texas—adopted broad regulations governing private-sector AI use.

Even among the four states with comprehensive AI laws, key differences stand out. Colorado’s consumer-focused law covers individuals interacting with “high-risk” AI systems in areas such as employment, housing, education, healthcare, and financial services. It also provides detailed guidance on risk assessment. California, by contrast, lacks a single comprehensive AI statute but has enacted sectoral laws regulating transparency and generative AI, establishing specific obligations for each sector. These measures followed Governor Gavin Newsom’s veto of a comprehensive AI bill. The alternative he supported regulates transparency and training data in selected sectors and, unlike Colorado’s law, specifically addresses generative AI systems that create original content.

Leadership Void Exposes Strategic Vulnerability

Texas favors industry self-regulation, setting it apart from states with prescriptive statutory approaches. While AI firms face no broad state-level oversight, they must comply with targeted laws such as the Texas biometric privacy statute, highlighting the uneven scope of state AI regulation.

Utah’s law is narrower and emphasizes consumer protection. It requires AI companies to disclose the use of generative AI during select “high-risk” activities, such as handling sensitive data or making decisions in critical areas like finance, law, or medicine. Unlike broader state frameworks, Utah’s requirements are limited to specific interactions rather than comprehensive oversight.

The timing and severity of state laws also diverge. Colorado’s enforcement regime begins in June 2026, while California’s rules are already in effect. Penalties vary significantly—California can impose fines reaching tens of millions of dollars, whereas other states levy lower penalties or, in certain cases, reserve strict criminal sanctions for offenses such as deepfake election fraud.

The absence of a unified federal framework enables the EU and China to influence global AI standards, limiting the United States’ ability to guide international protocols and protect democratic values. The lack of federal leadership also creates unexpected strategic vulnerabilities for national security and defense. Dual-use AI systems—technologies with both civilian and military applications—are increasingly caught in a legal limbo. Computer vision systems trained on civilian imagery can be repurposed by military intelligence analysts, while AI models developed to optimize commercial shipping logistics can also support naval supply chains. Yet these dual-use systems remain subject to various state regulations designed for civilian consumer protection rather than national security imperatives.

State-by-State Rules Pose National Security Challenge

Conflicting state AI laws leave defense contractors in a legal grey area, slowing innovation and raising costs. Unlike China, which can swiftly channel civilian technology into military applications, the United States faces damaging delays. Until Congress acts, regulatory fragmentation is likely to continue undermining both AI leadership and national security effectiveness.

Conflicting state AI laws leave defense contractors in a legal grey area, slowing innovation and raising costs

Challenges extend beyond dual-use complications to strategic information security. Fragmented state data governance laws create significant vulnerabilities for AI systems that house Americans’ personal data—including biometric information, health records, financial transactions, and location histories. These systems are governed by radically different storage, retention, and sovereignty requirements. Illinois requires explicit biometric consent; California grants broad data deletion rights; Texas imposes minimal restrictions. As a result, the United States lacks a coherent framework for data governance, raising questions about where sensitive data can be stored, accessed, retained, and deleted.

The national security implications are profound. When adversaries try to harvest Americans’ personal information at scale for intelligence targeting, influence operations, or social engineering, they exploit the weakest link—in this case, the state with the flimsiest laws. Sophisticated actors scrutinize variations in data security governance for vulnerabilities. Because federal agencies lack clear authority to set data security standards across state lines, drawing uniform red lines against foreign data harvesting is difficult. Ironically, the United States’ decentralized approach to data sovereignty—ostensibly designed to protect privacy—may instead increase citizens’ exposure to surveillance by hostile powers operating with no comparable constraints.

AI Industry Deeply Divided

Deep divisions within the AI industry further complicate the regulatory landscape. They became especially visible after Anthropic CEO Dario Amodei told 60 Minutes he was uncomfortable that decisions about AI rested solely with tech company leaders. The company developed Claude, an AI assistant trained on vast amounts of data to understand and generate human-like responses on a wide range of topics. Anthropic has endorsed the California measures requiring transparency reports, whistleblower protections, reporting of critical safety incidents, public disclosure of best practices, and annual reviews of safety policies.

Amodei’s position drew withering criticism from Sacks, who accused Anthropic of backing state-level regulations to woo state regulators and shape rules to its benefit, an allegation Anthropic denied. Sacks, a venture capitalist with numerous investments in tech, claimed Anthropic was responsible for the state regulatory minefield that endangers the start-up ecosystem of which he is a part. The dispute intensified after Anthropic opposed a proposed ten-year federal moratorium on state AI regulation in President Trump’s tax bill, which Amodei said was too blunt a measure given AI’s rapid development. The moratorium was ultimately removed before Trump signed the bill.

Nation’s AI Czar in Conflict of Interest

There is a logical way through the regulatory minefield: a federal AI regulation law establishing fair, uniform standards for the industry and preempting state laws. This approach is favored by Sacks, who has argued for federal legislation that would be weaker than proposals advanced by AI critics concerned about potential abuses. However, the preemption route would require congressional approval, and lawmakers overwhelmingly rejected it in 2025.

A logical way through the regulatory minefield: a federal AI regulation law establishing fair, uniform standards for the industry and preempting state laws

A credibility gap also shadows Sacks. Critics say his extensive investment holdings create a significant conflict of interest with his role as White House AI czar. Sacks came to Washington as a “special government employee,” a status conferred on individuals who work for the government fewer than 130 days a year and are therefore exempt from the congressional confirmation process and its associated financial scrutiny. Elon Musk had the same status when he worked at the White House under Trump. Sacks received an ethics waiver so he could maintain his position as a founding partner in Craft Ventures. In his White House job, he advocates federal preemption of state AI regulations while his venture capital firm holds major investments in AI companies that would benefit from this lighter regulatory approach.

US tech executives during a Senate hearing on “Winning the AI Race”. AFP

U.S. Global Influence Could Recede Without Action

AI advocates and critics alike agree that some form of regulation of this critical technology is necessary. The stakes extend far beyond corporate compliance costs or startup survival rates to questions of governance and strategic influence. America’s AI regulatory chaos represents a fundamental failure of governance at a critical technological juncture. Every month Washington delays action, the patchwork intensifies—not just across the fifty states, but also in the international arena, where rivals are writing rules that will govern AI’s role in society for generations.

Congress’s inaction is cloaked in irony. The country that pioneered artificial intelligence and continues to dominate its development now risks being sidelined, forced to conform to standards set in Brussels or Beijing because its own political system failed to rise to the occasion and produce a coherent national framework. Federal action would not eliminate all regulatory complexity or resolve every tension between innovation and safety. But without uniform laws, the United States faces a future in which its leading AI companies navigate a regulatory obstacle course that favors deep-pocketed incumbents over nimble competitors. In such a world, state-level forum shopping replaces principled governance, and American influence over civilization-shaping technology steadily erodes. The window for federal leadership remains open, but it’s closing rapidly. And what replaces coherent national policy may prove far more dangerous than the regulations the tech industry fears today.