
The Federal Push to Preempt State AI Laws: What It Means for Compliance

In December 2025, the White House issued an executive order directing federal agencies to evaluate whether state AI laws should be preempted by federal authority. The order frames state-level AI regulation as a threat to innovation and national competitiveness, and it sets up a process for the federal government to override state laws it considers duplicative or burdensome.

If you're a compliance officer who spent 2025 building programs around Colorado's SB 24-205, the Illinois AI Video Interview Act, NYC Local Law 144, and the growing patchwork of state AI requirements, your first reaction was probably a mix of relief and anxiety. Relief because a single federal standard would be simpler. Anxiety because "preemption is coming" is not the same as "you can stop complying."

Here's what the executive order actually does, where the boundaries are, and what it means for your compliance strategy today.

What the Executive Order Does

The order does three concrete things.

First, it establishes the DOJ AI Litigation Task Force. This task force is charged with identifying state and local AI regulations that conflict with federal policy or impose undue burdens on AI development and deployment. The task force has authority to intervene in litigation challenging state laws and to file amicus briefs supporting preemption arguments. It's a litigation tool, not a legislative one — it works through the courts, not through Congress.

Second, it directs the Commerce Department to evaluate state AI laws. The Secretary of Commerce is tasked with conducting a comprehensive review of state and local AI regulations and recommending which ones should be preempted. The review is supposed to assess whether state laws create conflicting compliance obligations, hinder interstate commerce, or undermine federal AI policy objectives. Commerce has 180 days to deliver its initial findings — putting the report around June 2026.

Third, it articulates a federal policy preference for "light-touch" AI regulation. The order states that federal AI policy should promote innovation, avoid duplicative compliance burdens, and ensure that AI regulation does not give foreign competitors an advantage. This is the policy backdrop against which the DOJ task force and Commerce Department will operate.

What It Doesn't Do

An executive order is not a law. It directs executive branch agencies on how to exercise their existing authority. It does not — and cannot — directly repeal or override state statutes. Here's what the order does not accomplish.

It does not preempt any state law today. The DOJ task force and Commerce Department review are processes that take time. Until a court rules that a specific state law is preempted by federal authority, or Congress passes legislation that explicitly preempts state AI laws, those state laws remain in effect and enforceable.

It does not create a federal AI regulatory framework. The order criticizes state regulation but does not replace it with a comprehensive federal alternative. This is a recurring tension in the federal approach: the argument for preemption is strongest when there's a clear federal standard to preempt with, and that standard doesn't exist yet.

It does not override state consumer protection or civil rights authority. The order includes explicit carve-outs acknowledging that states retain authority over consumer protection and civil rights. This is significant because many state AI laws — particularly those governing hiring, lending, and insurance — are grounded in consumer protection and anti-discrimination frameworks. The carve-outs may limit how broadly the preemption argument can reach.

The Tension at the Center

The executive order reflects a genuine tension between two legitimate policy goals. On one side is the argument that a patchwork of state AI regulations creates compliance chaos — different disclosure requirements, different impact assessment standards, different enforcement mechanisms across fifty states. That's a real cost, especially for companies deploying AI systems nationally.

On the other side is the argument that state regulation fills a gap the federal government has not filled. As of March 2026, there is no comprehensive federal AI law. The only AI-specific federal legislation of substance is narrow and sector-focused. States have stepped in because they see real harms — discriminatory hiring algorithms, opaque credit decisioning, invasive surveillance — and their constituents are demanding action.

This tension is not new. Federal preemption debates have played out in banking regulation (federal preemption of state usury laws under the National Bank Act), telecommunications (the FCC's attempts to preempt state net neutrality rules), and data privacy (the ongoing discussion about whether a federal privacy law should preempt state laws like CCPA). In each of these areas, the preemption question has been messy, contested, and ultimately resolved through a combination of litigation, legislation, and negotiation. AI will follow the same pattern.

What the Commerce Department Review Will Look At

The Commerce Department's evaluation is the most consequential piece of the executive order for compliance planning. Here's what's likely on the table.

Colorado's SB 24-205 is the most comprehensive state AI law and the most likely target. It imposes impact assessment, disclosure, and risk management obligations on developers and deployers of "high-risk" AI systems. The Commerce Department will likely argue it creates compliance costs that duplicate (or conflict with) emerging federal approaches like the NIST AI RMF.

Illinois's AI Video Interview Act, which requires consent and disclosure when AI is used to analyze video interviews, is narrower but still on the radar. NYC Local Law 144, requiring bias audits for automated employment decision tools, is a local ordinance but has influenced the national conversation.

The laws most likely to survive a preemption challenge are those firmly grounded in existing state authority over employment discrimination, consumer protection, and insurance regulation. The laws most vulnerable are those that create novel AI-specific regulatory frameworks that don't map clearly to traditional state regulatory authority.

Historical Precedent Suggests a Long Road

If you're hoping the preemption question will be settled quickly, history suggests otherwise.

The federal preemption of state data breach notification laws has been debated for over fifteen years without resolution. A federal privacy law that would preempt CCPA and similar state laws has been proposed in multiple sessions of Congress and has never passed. The Dodd-Frank Act's preemption provisions for state banking laws are still being litigated more than a decade after passage.

AI preemption will likely follow a similar trajectory. The executive order starts the process, but resolution will require either Congressional action (which moves slowly and faces partisan divides on tech regulation) or court decisions on specific state laws (which take years to work through the system).

The DOJ task force can accelerate things by intervening in existing litigation, but courts are not bound to adopt the federal government's preemption arguments. And the explicit carve-outs for consumer protection and civil rights give states strong footing to defend their laws.

Practical Advice: Don't Abandon State Compliance

Given all of this, what should you actually do?

Maintain dual-track compliance for now. Continue complying with applicable state AI laws. They are currently in effect and enforceable. The executive order does not change that. If you abandon state compliance based on the hope of future preemption and that preemption doesn't materialize — or takes years to materialize — you've created a compliance gap with real enforcement risk.

Monitor the Commerce Department review. The initial findings, expected around June 2026, will signal which state laws the federal government considers most problematic. That will help you prioritize and allocate resources. Subscribe to Commerce Department updates and follow the AI Office's publications.

Build your compliance program around frameworks, not specific laws. If your AI governance program is structured around the NIST AI RMF or a similar risk management framework, you're building capabilities that satisfy multiple regulatory requirements simultaneously. Impact assessments, risk management, transparency, and human oversight are common threads across virtually all AI regulations — state, federal, and international. A framework-based approach is resilient to regulatory changes because the underlying compliance activities remain valuable regardless of which specific laws end up applying.
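One way to make the framework-based approach concrete is to treat each compliance capability (impact assessments, bias audits, disclosures, human oversight) as a reusable control and map each applicable law onto the controls it exercises. The sketch below illustrates the idea in Python; the law names are real, but the control names and the law-to-control mappings are simplified, hypothetical examples for illustration, not official crosswalks or legal interpretations.

```python
# Illustrative sketch of a framework-based compliance inventory.
# The mappings below are simplified assumptions, not official guidance.

# Reusable capabilities, each tagged with the NIST AI RMF function
# it most closely supports (GOVERN / MAP / MEASURE / MANAGE).
CONTROL_CATALOG = {
    "impact_assessment": "MAP",
    "bias_audit": "MEASURE",
    "consumer_disclosure": "GOVERN",
    "human_oversight": "MANAGE",
}

# Hypothetical, simplified view of which capabilities each law exercises.
LAW_REQUIREMENTS = {
    "CO SB 24-205": {"impact_assessment", "consumer_disclosure", "human_oversight"},
    "NYC Local Law 144": {"bias_audit", "consumer_disclosure"},
    "IL AI Video Interview Act": {"consumer_disclosure"},
}

def required_capabilities(applicable_laws):
    """Return the union of capabilities needed for the given laws."""
    needed = set()
    for law in applicable_laws:
        needed |= LAW_REQUIREMENTS.get(law, set())
    return needed

if __name__ == "__main__":
    caps = required_capabilities(["CO SB 24-205", "NYC Local Law 144"])
    for cap in sorted(caps):
        print(f"{cap} -> NIST AI RMF {CONTROL_CATALOG[cap]}")
```

The point of the structure is resilience: if a law is preempted, you drop one entry from the mapping, but the underlying capabilities remain in place because other regulations (state, federal, or the EU AI Act) still exercise them.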

Track litigation. The DOJ task force's interventions in state AI law litigation will be the earliest concrete indicators of which preemption arguments have traction. Follow the cases. They'll tell you more about the likely outcome than policy statements will.

Don't forget the EU AI Act. While the federal government is trying to reduce domestic AI regulation, the EU AI Act's extraterritorial reach means any organization with EU customers or users faces a comprehensive international AI regulatory framework regardless of what happens with U.S. state laws. For many organizations, the EU AI Act is the binding constraint, and state law compliance is a subset of what they already need to do.

Key Takeaways

  • The December 2025 executive order creates a process for federal preemption of state AI laws but does not preempt any state law today. All existing state AI regulations remain in effect.
  • The Commerce Department's review (findings expected around June 2026) will signal which state laws the federal government considers most problematic.
  • Explicit carve-outs for state consumer protection and civil rights authority may limit how far preemption can reach, especially for AI laws grounded in anti-discrimination and consumer protection frameworks.
  • Maintain dual-track compliance with both state and federal requirements. Build your program around risk management frameworks like the NIST AI RMF to stay resilient regardless of which specific laws survive.


Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.