Side-by-Side Analysis

AI Regulatory Comparison

Compare the six major AI regulatory frameworks across key dimensions.

EU AI Act

Type

Binding law (EU Regulation)

Geographic Scope

European Union (extraterritorial — applies to any AI system whose output is used in the EU)

Who It Applies To

Providers, deployers, importers, distributors, and authorized representatives of AI systems placed on or used in the EU market

Risk-Based?

Yes — four-tier classification: Unacceptable (banned), High (strict obligations), Limited (transparency), Minimal (no obligations)
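
The four-tier logic can be sketched in code. This is an illustrative mapping only: the Act's actual classification turns on the Article 5 prohibitions and the Annex III categories, not on simple labels, and the example use cases below are hypothetical.

```python
# Hedged sketch of the EU AI Act's four-tier risk classification.
# The use-case-to-tier mapping is illustrative, not a legal determination.

EXAMPLE_CLASSIFICATION = {
    "social_scoring": "unacceptable",   # prohibited outright (Art. 5)
    "cv_screening": "high",             # employment decisions (Annex III)
    "customer_chatbot": "limited",      # transparency duty only
    "spam_filter": "minimal",           # no obligations
}

def obligations(tier: str) -> str:
    """Summarize the obligation attached to each risk tier."""
    return {
        "unacceptable": "banned",
        "high": "conformity assessment, risk management, human oversight",
        "limited": "transparency disclosures",
        "minimal": "none",
    }[tier]
```

The point of the tiered structure is that compliance effort scales with the tier: a minimal-risk tool carries no obligations while an Annex III system triggers the full high-risk regime.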

Key Requirements
  • Prohibited AI practices banned since February 2025 (social scoring, manipulative AI, untargeted facial recognition scraping)
  • High-risk systems require conformity assessments, risk management, data governance, human oversight, and CE marking
  • General-purpose AI model providers must publish training data summaries and share documentation with downstream providers
  • Transparency obligations for chatbots, deepfakes, and emotion recognition systems
  • Mandatory incident reporting for serious AI-related incidents
Penalties

Up to EUR 35 million or 7% of global annual turnover (whichever is higher) for prohibited practices; EUR 15 million or 3% for other violations; EUR 7.5 million or 1.5% for supplying misleading information. For SMEs and startups, the lower of the two amounts applies.
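
The cap arithmetic is simple but easy to get backwards. A minimal sketch, using the amounts from the summary above and the "higher of" rule from Article 99 of the Act (inverted to "lower of" for SMEs and startups); not legal advice:

```python
# EU AI Act maximum-fine caps per violation tier: (fixed EUR amount, % of turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "misleading_info": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine: higher of the two caps for large
    companies, lower of the two for SMEs and startups."""
    fixed, pct = TIERS[tier]
    pct_amount = pct * global_turnover_eur
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

# A company with EUR 1bn turnover committing a prohibited practice:
# 7% of 1bn = EUR 70M, which exceeds the EUR 35M fixed cap.
```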

Effective Date / Status

Entered into force August 1, 2024. Phased enforcement: prohibited practices (Feb 2025), GPAI rules (Aug 2025), high-risk systems (Aug 2026), full enforcement (Aug 2027).

Enforcement Body

European AI Office (GPAI models) and national competent authorities in each EU member state

Best For

Any organization placing AI systems on the EU market or whose AI outputs reach EU users. Essential for AI providers, tech companies with European customers, and multinationals.

NIST AI Risk Management Framework (AI RMF)

Type

Voluntary framework

Geographic Scope

United States (increasingly referenced globally)

Who It Applies To

All organizations that design, develop, deploy, or use AI systems — voluntary adoption, but increasingly expected

Risk-Based?

Yes — encourages risk-based prioritization through the Map, Measure, and Manage functions. No fixed tiers, but organizations prioritize by potential impact.

Key Requirements
  • Govern: Establish governance structures, policies, roles, and accountability for AI risk management
  • Map: Identify and document AI system context, stakeholders, intended use, and potential harms
  • Measure: Test and evaluate AI systems against trustworthy AI characteristics (fairness, safety, security, transparency, explainability, privacy, validity)
  • Manage: Prioritize and respond to identified risks, implement mitigations, and monitor effectiveness
  • Generative AI Profile (AI 600-1) addresses risks specific to large language models and generative systems
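
One way the Map, Measure, and Manage functions fit together is as a risk register: map the risks, measure each one, then manage by prioritizing the worst. This is an illustrative sketch, not NIST's prescribed method — the impact-times-likelihood scoring and the 1-to-5 scales are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One mapped risk, scored on assumed 1 (low) to 5 (severe) scales."""
    name: str
    impact: int
    likelihood: int

    @property
    def score(self) -> int:
        # Measure step: a simple impact x likelihood severity score.
        return self.impact * self.likelihood

def prioritize(risks: list[AIRisk]) -> list[AIRisk]:
    """Manage step: order risks so the highest-scoring are addressed first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Hypothetical register entries for a deployed AI system.
register = [
    AIRisk("biased hiring recommendations", impact=5, likelihood=3),
    AIRisk("chatbot privacy leakage", impact=4, likelihood=2),
    AIRisk("model drift in production", impact=3, likelihood=4),
]
top = prioritize(register)[0]  # the risk to mitigate first
```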
Penalties

None directly — the framework is voluntary. However, adoption is referenced as an affirmative defense or evidence of reasonable care under Colorado SB 24-205 and other state laws.

Effective Date / Status

Published January 26, 2023. Generative AI Profile (AI 600-1) published July 2024. Continuously updated with community profiles and companion resources.

Enforcement Body

NIST is a non-regulatory agency (no enforcement power). The framework is referenced by regulators including state attorneys general (Colorado) and federal agencies (OMB).

Best For

U.S.-based organizations seeking a recognized governance baseline. Essential for federal contractors, companies subject to state AI laws, and organizations wanting a defensible AI risk management program.

U.S. State and Local AI Laws

Type

Binding laws (state-level statutes and local ordinances)

Geographic Scope

United States — varies by state (Colorado, Illinois, Texas, California, NYC, Connecticut, and others)

Who It Applies To

Developers and deployers of AI systems used in consequential decisions, varying by state. Colorado covers both; NYC targets employers using automated hiring tools.

Risk-Based?

Partially — most laws focus on "high-risk" or "consequential" decisions (employment, lending, insurance, housing, healthcare, education). Low-risk AI uses generally unaffected.

Key Requirements
  • Algorithmic impact assessments for high-risk AI (Colorado, Connecticut)
  • Annual bias audits for automated hiring tools (NYC Local Law 144)
  • Consumer disclosure when AI is used in decisions affecting individuals (Colorado, Utah, Texas)
  • Developer-to-deployer documentation requirements (Colorado)
  • Frontier AI transparency: safety frameworks, catastrophic risk assessments, incident reporting (California SB 53)
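
The core calculation in an NYC Local Law 144 bias audit for a selection-style hiring tool is the impact ratio: each category's selection rate divided by the rate of the most-selected category. A minimal sketch — the category names and counts below are hypothetical, and a real audit also covers scoring tools and intersectional categories:

```python
def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per category, normalized by the highest category's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 25},
    applicants={"group_a": 100, "group_b": 100},
)
# group_a's 0.40 rate is highest, so its ratio is 1.0; group_b's is 0.25 / 0.40.
```

LL144 requires publishing these ratios rather than imposing a pass/fail threshold, which is why the sketch stops at the ratios themselves.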
Penalties

Varies widely. NYC: $500-$1,500 per violation. Illinois BIPA: $1,000-$5,000 per violation (private right of action). California SB 53: up to $1M per violation. Texas TRAIGA: $10,000-$200,000 per violation. Colorado: AG enforcement only.

Effective Date / Status

Varies by state. NYC LL144 in effect since July 2023. Colorado SB 24-205 effective June 30, 2026. California SB 53 and Texas TRAIGA effective January 2026. New laws introduced constantly — 1,208 AI bills across 50 states in 2025.

Enforcement Body

State attorneys general (Colorado, Texas, California). NYC Department of Consumer and Worker Protection. Illinois: private right of action under BIPA. Federal preemption threat from DOJ AI Litigation Task Force (est. Jan 2026).

Best For

Any organization operating in the U.S. that uses AI for employment, lending, insurance, housing, healthcare, or education decisions. Especially critical for companies operating in multiple states.

U.S. Federal AI Policy

Type

Executive orders, agency guidance, and existing statutory authority

Geographic Scope

United States (federal level)

Who It Applies To

Federal agencies, federal contractors, and any organization subject to FTC, EEOC, FDA, SEC, CFPB, or other federal agency authority

Risk-Based?

No unified risk framework at the federal level. Individual agencies apply risk-based approaches within their domains (FDA risk classification, EEOC adverse impact analysis).

Key Requirements
  • EO 14179 (Jan 2025): Pro-innovation directive; agencies instructed to remove barriers to AI development
  • Existing statutory authority: FTC Act (deceptive AI practices), Title VII (AI hiring discrimination), ECOA/FCRA (AI lending), FDA (AI medical devices)
  • OMB M-25-21: Innovation-focused guidance replacing Biden-era risk management requirements for federal AI use
  • December 2025 EO: Federal preemption of state AI laws, DOJ AI Litigation Task Force, FCC federal reporting standard
  • NIST AI RMF remains referenced as a compliance framework across federal guidance
Penalties

No standalone AI penalties. Enforcement through existing statutory mechanisms: FTC fines for deceptive practices, EEOC enforcement for employment discrimination, FDA enforcement for non-compliant medical devices, SEC penalties for AI washing.

Effective Date / Status

Ongoing. EO 14179 signed January 23, 2025. December 2025 EO on federal preemption. OMB M-25-21 issued April 2025. Agency enforcement is continuous.

Enforcement Body

Multiple agencies: FTC, EEOC, FDA, SEC, CFPB, HUD, DOJ. Each enforces existing law within its jurisdiction. DOJ AI Litigation Task Force targets state AI laws.

Best For

Organizations selling to or working with the federal government, companies in regulated industries (healthcare, finance, employment), and any organization tracking the federal vs. state regulatory dynamic.

Sector-Specific U.S. Regulation

Type

Existing federal and state statutes applied to AI use cases

Geographic Scope

United States (federal and state level, sector-specific)

Who It Applies To

Organizations using AI in healthcare, financial services, employment, insurance, and education. Covers both AI developers and deployers within each regulated sector.

Risk-Based?

Implicitly — existing regulations already target high-stakes decisions. Nondiscrimination, consumer protection, and safety laws apply proportional to the risk of the AI use case.

Key Requirements
  • Healthcare: FDA approval for AI medical devices, HIPAA compliance for AI processing PHI, Section 1557 nondiscrimination
  • Finance: ECOA/FCRA adverse action explanations for AI credit decisions, Federal Reserve/OCC model risk management guidance (SR 11-7), CFPB fair lending enforcement
  • Employment: Title VII adverse impact analysis for AI hiring tools, ADA accommodations, employer liability for vendor AI
  • Insurance: NAIC model bulletin on AI, unfair discrimination testing, Colorado SB 21-169 insurance AI regulations
  • Education: FERPA for AI processing student data, COPPA for children under 13, equity monitoring under Title VI
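
The Title VII adverse impact analysis mentioned above typically starts with the EEOC's "four-fifths rule": a group selected at less than 80% of the highest group's rate is generally treated as evidence of adverse impact. A hedged sketch with hypothetical counts — real analyses add statistical significance testing and practical-significance review:

```python
def adverse_impact(selected: dict[str, int], applicants: dict[str, int],
                   threshold: float = 0.8) -> dict[str, bool]:
    """Flag each group whose selection rate falls below the four-fifths
    threshold relative to the highest-rate group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: (rates[g] / best) < threshold for g in rates}

flags = adverse_impact(
    selected={"group_a": 50, "group_b": 30},
    applicants={"group_a": 100, "group_b": 100},
)
# group_b: 0.30 / 0.50 = 0.60, below the 0.80 threshold, so it is flagged.
```

Running this screen on an AI hiring tool's outcomes is a common first step before deeper validation of the model itself.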
Penalties

Varies by statute. FDA: device recalls and injunctions. CFPB: civil penalties and restitution. EEOC: compensatory and punitive damages. Illinois BIPA: $1,000-$5,000 per violation. State insurance regulators: license revocation and fines.

Effective Date / Status

Already in effect — these are existing laws being applied to AI. FDA has authorized over 1,250 AI/ML medical devices. CFPB, EEOC, SEC, and state regulators actively enforcing.

Enforcement Body

Sector-specific: FDA (healthcare), CFPB/OCC/Federal Reserve (finance), EEOC (employment), state insurance commissioners (insurance), Department of Education (education). State AGs also enforce across sectors.

Best For

Organizations deploying AI in any regulated industry. Healthcare providers, financial institutions, employers, insurers, and educational institutions should prioritize understanding which existing rules already apply to their AI systems.

Global and International AI Governance

Type

Mix of binding laws, voluntary frameworks, international treaties, and standards

Geographic Scope

Global — EU, China, UK, Canada, Brazil, Japan, South Korea, Singapore, Australia, and international bodies (OECD, Council of Europe, G7, UNESCO)

Who It Applies To

Organizations operating across borders. Specific requirements depend on which jurisdictions an organization operates in or serves.

Risk-Based?

Generally yes — risk-based approaches dominate globally (EU AI Act, Brazil PL 2338, South Korea Framework Act). China uses an application-specific model. UK relies on sector regulators.

Key Requirements
  • Council of Europe Framework Convention on AI (binding treaty, in force Nov 2025): requires AI respect for human rights, risk assessments, transparency
  • China: Algorithm registration, content labeling, generative AI service rules, cybersecurity law amendments
  • UK: Sector-led regulation via five cross-sector principles (safety, transparency, fairness, accountability, contestability)
  • ISO/IEC 42001: Certifiable AI management system standard for demonstrating governance maturity
  • OECD AI Principles: Endorsed by 45+ countries as baseline governance framework
Penalties

Varies dramatically by jurisdiction. EU: up to 7% of global turnover. China: administrative penalties and service suspension. UK: sector-specific regulator penalties. Brazil (proposed): up to 2% of revenue. Many frameworks are voluntary with no direct penalties.

Effective Date / Status

Ongoing and expanding. China has enforced AI regulations since 2022. Council of Europe Convention in force November 2025. South Korea AI Framework Act effective January 2026. Japan AI Promotion Act in effect since September 2025. Brazil legislation pending.

Enforcement Body

Varies: European AI Office (EU), Cyberspace Administration of China, sector regulators (UK FCA, ICO, CMA, etc.), national competent authorities in each country. No single global enforcement body.

Best For

Multinational organizations and companies with customers or operations outside the U.S. Essential for understanding how regulatory requirements differ across borders and planning cross-jurisdictional compliance strategies.

Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.