AI Regulatory Comparison
Compare the six major AI regulatory frameworks across key dimensions.
| Dimension | EU AI Act | NIST AI RMF | U.S. State AI Laws | U.S. Federal Approach | Sector-Specific U.S. Laws | Global & International Frameworks |
|---|---|---|---|---|---|---|
| Type | Binding law (EU Regulation) | Voluntary framework | Binding laws (state-level statutes and local ordinances) | Executive orders, agency guidance, and existing statutory authority | Existing federal and state statutes applied to AI use cases | Mix of binding laws, voluntary frameworks, international treaties, and standards |
| Geographic Scope | European Union (extraterritorial — applies to any AI system whose output is used in the EU) | United States (increasingly referenced globally) | United States — varies by state (Colorado, Illinois, Texas, California, NYC, Connecticut, and others) | United States (federal level) | United States (federal and state level, sector-specific) | Global — EU, China, UK, Canada, Brazil, Japan, South Korea, Singapore, Australia, and international bodies (OECD, Council of Europe, G7, UNESCO) |
| Who It Applies To | Providers, deployers, importers, distributors, and authorized representatives of AI systems placed on or used in the EU market | All organizations that design, develop, deploy, or use AI systems — voluntary adoption, but increasingly expected | Developers and deployers of AI systems used in consequential decisions, varying by state. Colorado covers both; NYC targets employers using automated hiring tools. | Federal agencies, federal contractors, and any organization subject to FTC, EEOC, FDA, SEC, CFPB, or other federal agency authority | Organizations using AI in healthcare, financial services, employment, insurance, and education. Covers both AI developers and deployers within each regulated sector. | Organizations operating across borders. Specific requirements depend on which jurisdictions an organization operates in or serves. |
| Risk-Based? | Yes — four-tier classification: Unacceptable (banned), High (strict obligations), Limited (transparency), Minimal (no obligations) | Yes — encourages risk-based prioritization through the Map, Measure, and Manage functions. No fixed tiers, but organizations prioritize by potential impact. | Partially — most laws focus on "high-risk" or "consequential" decisions (employment, lending, insurance, housing, healthcare, education). Low-risk AI uses generally unaffected. | No unified risk framework at the federal level. Individual agencies apply risk-based approaches within their domains (FDA risk classification, EEOC adverse impact analysis). | Implicitly — existing regulations already target high-stakes decisions. Nondiscrimination, consumer protection, and safety laws apply proportional to the risk of the AI use case. | Generally yes — risk-based approaches dominate globally (EU AI Act, Brazil PL 2338, South Korea Framework Act). China uses an application-specific model. UK relies on sector regulators. |
| Key Requirements | Prohibited-practice bans; conformity assessments, risk management, and human oversight for high-risk systems; GPAI documentation and training data summaries; transparency obligations; incident reporting | Four core functions (Govern, Map, Measure, Manage) plus the Generative AI Profile for LLM-specific risks | Algorithmic impact assessments, annual bias audits (NYC LL144), consumer disclosures, developer-to-deployer documentation, frontier AI transparency (CA SB 53) | EO 14148 pro-innovation directives; enforcement through existing statutes (FTC Act, Title VII, ECOA/FCRA, FDA); OMB M-25-21 guidance; December 2025 preemption EO | Sector rules already on the books: FDA approval and HIPAA (healthcare), ECOA/FCRA and SR 11-7 (finance), Title VII and ADA (employment), NAIC bulletin (insurance), FERPA and COPPA (education) | Council of Europe treaty, China's algorithm registration and content labeling, UK cross-sector principles, ISO/IEC 42001, OECD AI Principles |
| Penalties | Up to EUR 35 million or 7% of global annual turnover for prohibited practices; EUR 15 million or 3% for other violations; EUR 7.5 million or 1.5% for misleading information. Lower-of calculation for SMEs and startups. | None directly — the framework is voluntary. However, adoption is referenced as an affirmative defense or evidence of reasonable care under Colorado SB 24-205 and other state laws. | Varies widely. NYC: $500-$1,500 per violation. Illinois BIPA: $1,000-$5,000 per violation (private right of action). California SB 53: up to $1M per violation. Texas TRAIGA: $10,000-$200,000 per violation. Colorado: AG enforcement only. | No standalone AI penalties. Enforcement through existing statutory mechanisms: FTC fines for deceptive practices, EEOC enforcement for employment discrimination, FDA enforcement for non-compliant medical devices, SEC penalties for AI washing. | Varies by statute. FDA: device recalls and injunctions. CFPB: civil penalties and restitution. EEOC: compensatory and punitive damages. Illinois BIPA: $1,000-$5,000 per violation. State insurance regulators: license revocation and fines. | Varies dramatically by jurisdiction. EU: up to 7% of global turnover. China: administrative penalties and service suspension. UK: sector-specific regulator penalties. Brazil (proposed): up to 2% of revenue. Many frameworks are voluntary with no direct penalties. |
| Effective Date / Status | Entered into force August 1, 2024. Phased enforcement: prohibited practices (Feb 2025), GPAI rules (Aug 2025), high-risk systems (Aug 2026), full enforcement (Aug 2027). | Published January 26, 2023. Generative AI Profile (AI 600-1) published July 2024. Continuously updated with community profiles and companion resources. | Varies by state. NYC LL144 in effect since July 2023. Colorado SB 24-205 effective June 30, 2026. California SB 53 and Texas TRAIGA effective January 2026. New laws introduced constantly — 1,208 AI bills across 50 states in 2025. | Ongoing. EO 14148 signed January 20, 2025. December 2025 EO on federal preemption. OMB M-25-21 issued April 2025. Agency enforcement is continuous. | Already in effect — these are existing laws being applied to AI. FDA has authorized over 1,250 AI/ML medical devices. CFPB, EEOC, SEC, and state regulators actively enforcing. | Ongoing and expanding. China has enforced AI regulations since 2022. Council of Europe Convention in force November 2025. South Korea AI Framework Act effective January 2026. Japan AI Promotion Act in effect since September 2025. Brazil legislation pending. |
| Enforcement Body | European AI Office (GPAI models) and national competent authorities in each EU member state | NIST is a non-regulatory agency (no enforcement power). The framework is referenced by regulators including state attorneys general (Colorado) and federal agencies (OMB). | State attorneys general (Colorado, Texas, California). NYC Department of Consumer and Worker Protection. Illinois: private right of action under BIPA. Federal preemption threat from DOJ AI Litigation Task Force (est. Jan 2026). | Multiple agencies: FTC, EEOC, FDA, SEC, CFPB, HUD, DOJ. Each enforces existing law within its jurisdiction. DOJ AI Litigation Task Force targets state AI laws. | Sector-specific: FDA (healthcare), CFPB/OCC/Federal Reserve (finance), EEOC (employment), state insurance commissioners (insurance), Department of Education (education). State AGs also enforce across sectors. | Varies: European AI Office (EU), Cyberspace Administration of China, sector regulators (UK FCA, ICO, CMA, etc.), national competent authorities in each country. No single global enforcement body. |
| Best For | Any organization placing AI systems on the EU market or whose AI outputs reach EU users. Essential for AI providers, tech companies with European customers, and multinationals. | U.S.-based organizations seeking a recognized governance baseline. Essential for federal contractors, companies subject to state AI laws, and organizations wanting a defensible AI risk management program. | Any organization operating in the U.S. that uses AI for employment, lending, insurance, housing, healthcare, or education decisions. Especially critical for companies operating in multiple states. | Organizations selling to or working with the federal government, companies in regulated industries (healthcare, finance, employment), and any organization tracking the federal vs. state regulatory dynamic. | Organizations deploying AI in any regulated industry. Healthcare providers, financial institutions, employers, insurers, and educational institutions should prioritize understanding which existing rules already apply to their AI systems. | Multinational organizations and companies with customers or operations outside the U.S. Essential for understanding how regulatory requirements differ across borders and planning cross-jurisdictional compliance strategies. |
Key Requirements by Framework

EU AI Act
- Prohibited AI practices banned since February 2025 (social scoring, manipulative AI, untargeted facial recognition scraping)
- High-risk systems require conformity assessments, risk management, data governance, human oversight, and CE marking
- General-purpose AI model providers must publish training data summaries and share documentation with downstream providers
- Transparency obligations for chatbots, deepfakes, and emotion recognition systems
- Mandatory incident reporting for serious AI-related incidents

NIST AI RMF
- Govern: Establish governance structures, policies, roles, and accountability for AI risk management
- Map: Identify and document AI system context, stakeholders, intended use, and potential harms
- Measure: Test and evaluate AI systems against trustworthy AI characteristics (fairness, safety, security, transparency, explainability, privacy, validity)
- Manage: Prioritize and respond to identified risks, implement mitigations, and monitor effectiveness
- Generative AI Profile (AI 600-1) addresses risks specific to large language models and generative systems

U.S. State AI Laws
- Algorithmic impact assessments for high-risk AI (Colorado, Connecticut)
- Annual bias audits for automated hiring tools (NYC Local Law 144)
- Consumer disclosure when AI is used in decisions affecting individuals (Colorado, Utah, Texas)
- Developer-to-deployer documentation requirements (Colorado)
- Frontier AI transparency: safety frameworks, catastrophic risk assessments, incident reporting (California SB 53)

U.S. Federal Approach
- EO 14148 (Jan 2025): Pro-innovation directive; agencies instructed to remove barriers to AI development
- Existing statutory authority: FTC Act (deceptive AI practices), Title VII (AI hiring discrimination), ECOA/FCRA (AI lending), FDA (AI medical devices)
- OMB M-25-21: Innovation-focused guidance replacing Biden-era risk management requirements for federal AI use
- December 2025 EO: Federal preemption of state AI laws, DOJ AI Litigation Task Force, FCC federal reporting standard
- NIST AI RMF remains referenced as a compliance framework across federal guidance

Sector-Specific U.S. Laws
- Healthcare: FDA approval for AI medical devices, HIPAA compliance for AI processing PHI, Section 1557 nondiscrimination
- Finance: ECOA/FCRA adverse action explanations for AI credit decisions, OCC model risk management (SR 11-7), CFPB fair lending enforcement
- Employment: Title VII adverse impact analysis for AI hiring tools, ADA accommodations, employer liability for vendor AI
- Insurance: NAIC model bulletin on AI, unfair discrimination testing, Colorado SB 21-169 insurance AI regulations
- Education: FERPA for AI processing student data, COPPA for children under 13, equity monitoring under Title VI

Global & International Frameworks
- Council of Europe Framework Convention on AI (binding treaty, in force Nov 2025): requires AI respect for human rights, risk assessments, transparency
- China: Algorithm registration, content labeling, generative AI service rules, cybersecurity law amendments
- UK: Sector-led regulation via five cross-sector principles (safety, transparency, fairness, accountability, contestability)
- ISO/IEC 42001: Certifiable AI management system standard for demonstrating governance maturity
- OECD AI Principles: Endorsed by 45+ countries as baseline governance framework
Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.