AI Regulation Timeline
Major milestones and upcoming deadlines across all AI regulatory frameworks
2019
OECD AI Principles adopted by 42 countries, establishing the first widely endorsed international AI governance framework.
2020
Illinois Artificial Intelligence Video Interview Act takes effect, one of the first U.S. laws directly regulating AI in employment.
2021
European Commission publishes the initial proposal for the AI Act (COM/2021/206), launching the legislative process.
NIST issues a Request for Information (RFI) seeking public input on AI risk management, beginning the development process that would lead to the AI RMF.
UNESCO Recommendation on the Ethics of AI adopted by all 193 member states.
2022
NIST publishes the initial draft of the AI Risk Management Framework for public comment, receiving extensive feedback from more than 240 organizations.
CFPB issues Circular 2022-03 on adverse action notice requirements for algorithmic credit decisions.
China’s Algorithmic Recommendation Management Provisions take effect—the first binding regulation targeting recommendation algorithms.
EEOC launches initiative on AI and algorithmic fairness in employment, pledging technical guidance.
Canada introduces Bill C-27, including the Artificial Intelligence and Data Act (AIDA), proposed as the country’s first comprehensive AI law.
NIST releases a second draft of the AI RMF incorporating public feedback, with additional workshops and comment periods to refine the framework.
White House OSTP releases the Blueprint for an AI Bill of Rights, outlining five principles for automated systems. Non-binding but influential.
2023
NIST publishes the AI Risk Management Framework 1.0 (NIST AI 100-1), establishing a voluntary governance framework for managing AI risks.
Alongside the AI RMF, NIST releases the companion AI RMF Playbook, providing practical guidance and suggested actions for each subcategory of the framework.
Multiple agencies issue AI-specific guidance: FTC on deceptive AI practices, FDA on AI medical devices, SEC on AI in financial services, HUD on AI in housing.
China’s Deep Synthesis Provisions take effect, regulating deepfakes and synthetic content.
UK publishes AI regulation white paper outlining its pro-innovation, sector-led approach.
EEOC issues technical assistance on AI and Title VII, clarifying that employers are liable for discriminatory outcomes from AI hiring tools and detailing how to assess adverse impact from AI in selection procedures.
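Adverse-impact analysis of the kind referenced in this guidance is commonly illustrated with the four-fifths (80%) rule from the Uniform Guidelines on Employee Selection Procedures: a group's selection rate below 80% of the highest group's rate is generally treated as evidence of potential adverse impact. A minimal sketch, using entirely hypothetical applicant numbers:

```python
# Illustrative four-fifths (80%) rule check for adverse impact.
# All applicant counts below are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def impact_ratio(rate: float, highest_rate: float) -> float:
    """A group's selection rate relative to the highest group's rate."""
    return rate / highest_rate

# Hypothetical outcomes from an automated screening tool
groups = {
    "Group A": selection_rate(48, 80),   # 0.60
    "Group B": selection_rate(24, 60),   # 0.40
}

highest = max(groups.values())
for name, rate in groups.items():
    ratio = impact_ratio(rate, highest)
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Here Group B's impact ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold; the four-fifths rule is a screening heuristic, not a legal conclusion, and regulators also consider statistical significance and sample size.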
Department of Education Office of Educational Technology publishes AI in education report.
European Parliament adopts its negotiating position on the AI Act, adding provisions for general-purpose AI models and foundation models.
New York City Local Law 144 takes effect, requiring bias audits and public disclosure for automated employment decision tools.
China’s Generative AI Service Management Measures take effect.
President Biden signs Executive Order 14110 on Safe, Secure, and Trustworthy AI, the most comprehensive federal AI action to date, directing federal agencies to use NIST frameworks for AI risk management.
SEC proposes rules on conflicts of interest from use of predictive data analytics by broker-dealers and investment advisers.
G7 Hiroshima AI Process produces Guiding Principles and a voluntary Code of Conduct for advanced AI systems.
UK AI Safety Institute established to focus on frontier AI safety research and evaluation.
European Parliament and Council reach a political agreement on the final text of the AI Act after marathon trilogue negotiations.
NAIC adopts model bulletin on the use of AI systems by insurers.
ISO/IEC 42001 (AI Management System standard) published.
2024
NIST continues to develop community profiles, crosswalks to international standards, and updated guidance. State legislatures increasingly reference NIST AI frameworks in new AI legislation.
HHS finalizes updated Section 1557 rule addressing nondiscrimination in AI-driven healthcare decisions.
Colorado Division of Insurance finalizes detailed regulations on AI in insurance under SB 21-169.
European Parliament formally approves the AI Act by a vote of 523 to 46, with 49 abstentions.
Utah enacts the AI Policy Act (SB 149), requiring disclosure when consumers interact with generative AI.
OMB issues M-24-10, establishing governance requirements for federal agency AI use including Chief AI Officers and AI use case inventories.
Council of the European Union formally adopts the AI Act, completing the legislative process.
Colorado Governor Polis signs SB 24-205, the first comprehensive state AI law covering high-risk AI across multiple domains.
OECD AI Principles updated to address generative AI and other developments.
Connecticut enacts SB 1103, requiring AI impact assessments and consumer disclosure for high-risk systems.
The AI Act is signed into law as Regulation (EU) 2024/1689 of the European Parliament and of the Council.
NIST publishes the Generative AI Profile (NIST AI 600-1), a companion resource addressing risks specific to generative AI systems including large language models and image generators.
The AI Act is published in the Official Journal of the European Union.
The AI Act enters into force, starting the phased enforcement timeline.
Illinois HB 3773 signed, amending the Illinois Human Rights Act to make it a civil rights violation to use AI for employment decisions without notice or in a discriminatory manner.
California Governor Newsom vetoes SB 1047, citing concerns about stifling AI innovation. The bill's influence persists in ongoing legislative discussions.
FTC launches Operation AI Comply with five enforcement actions against companies making false or unsubstantiated claims about AI products and services.
Council of Europe Framework Convention on AI opened for signature—the first legally binding international AI treaty.
South Korea’s National Assembly passes the Framework Act on AI Development and Establishment of a Foundation for Trustworthiness. Brazilian Senate approves AI bill PL 2338/2023.
2025
Texas enacts HB 1709, establishing AI governance for state agencies. Over 1,200 AI bills introduced across all 50 states, with 145 enacted into law.
EEOC removes AI-related guidance documents from its website, including Title VII adverse impact and ADA guidance, following executive order revoking Biden’s AI EO. Underlying laws remain fully in effect.
FDA issues draft guidance on AI-enabled device software functions applying a Total Product Life Cycle (TPLC) approach with Predetermined Change Control Plans (PCCP).
Canadian Parliament prorogued; Bill C-27 (including AIDA) dies on the order paper. South Korea promulgates AI Framework Act (effective January 2026).
President Trump signs EO 14148, revoking EO 14110 and directing agencies to remove barriers to AI innovation. OMB M-24-10 is subsequently rescinded.
EU AI Act prohibitions on unacceptable-risk AI practices (Article 5) become enforceable, and AI literacy obligations apply from the same date. UK AI Safety Institute renamed the AI Security Institute.
Brazilian AI bill PL 2338/2023 forwarded to Chamber of Deputies.
OMB issues M-25-21 ("Accelerating Federal Use of AI through Innovation, Governance, and Public Trust") and companion M-25-22 (AI procurement), replacing Biden-era AI memos with innovation-focused guidance.
Japan passes the AI Promotion Act (partially effective June 2025, fully effective September 2025), becoming the second major APAC economy with comprehensive AI legislation.
U.S. AI Safety Institute renamed the Center for AI Standards and Innovation (CAISI) by the Trump administration; Secretary of Commerce Howard Lutnick announced the rebranding, and CAISI continues to operate within NIST.
Governor Abbott signs Texas TRAIGA (HB 149), the Texas Responsible AI Governance Act, effective January 1, 2026.
GPAI Code of Practice published by the European AI Office, providing a voluntary compliance framework for general-purpose AI model providers.
Massachusetts Attorney General settles AI lending discrimination case against a student loan company over disparate impact from AI underwriting models.
Colorado Governor Polis signs SB 25B-004, delaying the Colorado AI Act (SB 24-205) effective date from February 1, 2026 to June 30, 2026.
Obligations for general-purpose AI (GPAI) model providers take effect. The governance framework for the European AI Office and national competent authorities also applies.
California Governor Newsom signs SB 53 (Transparency in Frontier AI Act), California's first comprehensive frontier AI law, targeting the largest AI developers.
China’s AI Content Labeling Measures take effect alongside mandatory technical standard GB45438-2025. China issues National AI Governance Framework.
China’s top legislature passes major Cybersecurity Law amendments incorporating AI provisions (effective January 2026).
Council of Europe Framework Convention on AI enters into force—the first binding international AI treaty now in effect, ratified by the UK, France, Norway, and others.
NIST releases preliminary draft Cyber AI Profile (NIST IR 8596), providing guidance on managing AI-related cybersecurity risks aligned with the NIST Cybersecurity Framework 2.0.
President Trump signs the executive order "Ensuring a National Policy Framework for Artificial Intelligence," targeting state AI laws for potential federal preemption and establishing the DOJ AI Litigation Task Force.
2026
Multiple state AI laws take effect: California SB 53, AB 2013, and SB 942; Texas TRAIGA (HB 149); and others.
DOJ AI Litigation Task Force begins operations, directed to challenge state AI laws in federal court.
FDA issues guidance reducing oversight of certain low-risk AI digital health products, including AI-enabled software and wearables.
China’s Cybersecurity Law amendments take effect, incorporating AI provisions into national law. South Korea’s AI Framework Act takes full effect. Global AI regulatory activity continues to accelerate across all major jurisdictions.
The European Commission is expected to publish guidelines on classification of high-risk AI systems.
FDA finalizes QMSR cybersecurity guidance for medical devices, reinforcing security requirements for connected AI systems.
FDA has authorized over 1,250 AI/ML-enabled medical devices; enforcement actions across sectors continue to increase.
White House releases legislative blueprint urging Congress to adopt a unified federal AI framework codifying federal preemption of state AI laws.
Colorado SB 24-205 (delayed) takes effect, marking the first enforcement date for a comprehensive state AI law.
Requirements for high-risk AI systems listed in Annex III become enforceable, including conformity assessments, risk management, data governance, and human oversight obligations.
2027
Full enforcement of all remaining provisions, including obligations for high-risk AI systems embedded in products already covered by EU harmonization legislation (Annex I).
Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.