Global AI Governance
AI governance is expanding rapidly worldwide, but with no unified approach. Countries and international bodies are pursuing regulation at different speeds, with different philosophies—creating a fragmented landscape that companies operating across borders must navigate carefully.
Last reviewed: March 2026
Key Facts
- Key Jurisdictions
- EU, China, UK, Canada, Brazil, Japan, Singapore, Australia
- International Frameworks
- OECD AI Principles, G7 Hiroshima Process, Council of Europe Convention, ISO/IEC 42001
- Binding International Treaty
- Council of Europe Framework Convention on AI — entered into force November 2025
- Dominant Trend
- Risk-based regulation with growing enforcement
Overview: A Fragmented Global Landscape
There is no single global framework for AI regulation. Instead, countries, regional blocs, and international organizations are each developing their own approaches—some binding, some voluntary, and some still under debate. The result is a patchwork that creates real compliance challenges for companies building or deploying AI across borders.
Several broad patterns have emerged. The EU has taken a prescriptive, risk-based approach with the EU AI Act, creating the most comprehensive binding AI legislation in the world. China has moved faster than any other country in implementing operational AI regulations, taking a layered approach that addresses specific AI applications one at a time. The UK has opted for a decentralized, sector-led model that relies on existing regulators rather than a single AI law. Canada, Brazil, and others are at various stages of developing their own legislative frameworks.
At the international level, the OECD AI Principles provide a widely referenced baseline, the G7 has developed a voluntary code of conduct for advanced AI, and the Council of Europe’s Framework Convention on AI—the first legally binding international AI treaty—entered into force in November 2025. Meanwhile, technical standards like ISO/IEC 42001 are emerging as a practical tool for demonstrating AI governance maturity.
For companies operating globally, the challenge is not just understanding each jurisdiction’s rules—it is managing the gaps, overlaps, and contradictions between them. There is no mutual recognition framework, no single compliance certification that satisfies all jurisdictions, and no sign that regulatory convergence is coming soon.
Canada
Canada was among the first countries to introduce comprehensive AI legislation, but the effort has stalled due to political developments.
The Artificial Intelligence and Data Act (AIDA) was introduced in June 2022 as Part 3 of Bill C-27, the Digital Charter Implementation Act. AIDA would have established a regulatory framework for “high-impact” AI systems, requiring organizations to conduct risk assessments, implement mitigation measures, maintain transparency and public disclosure, and submit to government audits.
AIDA included several notable provisions. It would have imposed criminal liability for reckless or knowing deployment of AI systems that cause serious harm, making it one of the few AI laws globally to include criminal penalties. It also would have created a new AI and Data Commissioner to oversee enforcement.
However, AIDA faced significant criticism during the legislative process. Critics argued the framework was too vague, delegated too much to future regulations, and did not adequately address fundamental rights. The bill was substantially amended in committee but never reached a final vote.
In January 2025, Parliament was prorogued, and Bill C-27—including AIDA—died on the order paper. Under the new government that followed, the future of dedicated AI legislation in Canada is uncertain. As of early 2026, no replacement bill has been introduced.
This does not mean AI is unregulated in Canada. Existing laws—including the Personal Information Protection and Electronic Documents Act (PIPEDA), the Canadian Human Rights Act, and sector-specific regulations—continue to apply to AI systems. The Office of the Privacy Commissioner has been active in investigating AI-related privacy complaints, and the Treasury Board has maintained its Directive on Automated Decision-Making for federal government use of AI, which requires algorithmic impact assessments and transparency for automated decisions affecting individuals.
- AIDA’s High-Impact System Framework
- AIDA would have required organizations deploying “high-impact” AI systems to assess whether the system could cause harm, establish mitigation measures, monitor the system for compliance, and maintain records for regulatory inspection. The definition of “high-impact” was to be specified in future regulations—a point of significant criticism.
- Criminal Liability Provisions
- AIDA proposed criminal penalties for possessing or using an AI system knowing it could cause serious physical or psychological harm, or recklessly deploying AI in a manner that causes substantial economic loss. Maximum penalties included fines and imprisonment. These provisions made AIDA unique among proposed AI laws globally.
- Current Status
- Bill C-27 died when Parliament was prorogued in January 2025. Minister of Artificial Intelligence and Digital Innovation Evan Solomon has announced the government’s intention to propose new AI legislation that would be distinct from AIDA—not a revival of the previous bill. The new legislation is expected to be tabled in 2026. In the meantime, existing privacy, human rights, and sector-specific laws remain the primary regulatory tools.
- Provincial Developments: Ontario Bill 194
- Ontario’s Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, received Royal Assent in November 2024. The bill regulates AI use by Ontario public sector entities—including hospitals, schools, law enforcement, and government ministries—requiring transparency and accountability measures for automated decision systems. Bill 194 is not yet fully in effect, as final implementing regulations are still being developed. This is the most significant provincial AI governance development in Canada to date.
- Federal Privacy Reform
- A new federal private sector privacy statute and a companion tribunal bill are expected in late 2025 or early 2026. These would modernize Canada’s privacy framework and could have significant implications for AI systems that process personal information.
China
China has the most advanced operational AI regulatory framework in the world. Rather than pursuing a single comprehensive AI law, China has taken a layered, application-specific approach—issuing binding regulations targeting specific types of AI systems as they emerge. The Cyberspace Administration of China (CAC) is the primary regulator.
China’s approach is pragmatic and fast-moving. Each new regulation addresses a specific AI application that has become commercially significant, imposing obligations on providers and users within months of the regulation’s effective date. While enforcement details are still developing, the regulatory text is binding and backed by administrative penalties.
- Algorithmic Recommendation Management Provisions (March 2022)
- These provisions regulate recommendation algorithms used by internet platforms. They require providers to register their algorithms with the CAC, give users the ability to turn off personalized recommendations, prohibit the use of algorithms to engage in price discrimination, and restrict algorithmic promotion of content that could disrupt public order. China was the first country to implement binding regulations specifically targeting recommendation algorithms.
- Deep Synthesis Provisions (January 2023)
- The Deep Synthesis Provisions target deepfakes and AI-generated synthetic content including text, images, audio, and video. Providers of deep synthesis services must label AI-generated content, verify user identities, maintain logs, and prevent the creation of content that is prohibited under Chinese law. These provisions were among the first binding deepfake regulations anywhere in the world.
- Generative AI Service Management Measures (August 2023)
- The Generative AI Measures regulate generative AI services offered to the public within China. Providers must register with the CAC, conduct security assessments before launch, ensure training data is lawful and accurate, label AI-generated content, and comply with existing content moderation requirements. The measures also require providers to address user complaints about AI-generated content. The final version was softened from earlier drafts, applying primarily to public-facing generative AI services rather than all internal uses.
- AI Content Labeling Measures (September 2025)
- The Labeling Measures for AI Generated Content took effect September 1, 2025, along with the mandatory technical standard GB45438-2025. The measures require visible labels with AI symbols for chatbots, AI-written content, synthetic voices, and face generation or face-swap applications. Explicit labels must also remain embedded in any downloadable or exportable AI-generated content. These measures significantly expand China’s earlier content-labeling requirements into a comprehensive, enforceable regime.
- Cybersecurity Law (CSL) Amendments (January 2026)
- On October 28, 2025, China’s top legislature passed major amendments to the Cybersecurity Law, bringing AI into national law for the first time. The amendments support AI innovation, promote the development of training data resources and computing infrastructure, strengthen AI ethics regulation, and enhance AI risk assessment and security governance. The amendments took effect January 1, 2026, marking a significant elevation of AI governance from sector-specific regulations to national-level legislation.
- National AI Governance Framework (September 2025)
- China issued its National AI Governance Framework on September 9, 2025, providing an overarching policy document that ties together the country’s various AI-specific regulations into a cohesive governance strategy.
- Algorithm Registration and Transparency
- A distinctive feature of China’s approach is the algorithm registration system. Providers of algorithmic services—including recommendation engines, generative AI, and deep synthesis tools—must register their algorithms with the CAC, providing descriptions of the algorithm’s function, logic, and intended purpose. The CAC maintains a public registry. As of early 2026, over 100 algorithms have been registered.
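The registration workflow described above can be made concrete with a small internal-tracking sketch. This is a hypothetical illustration only: the class name `AlgorithmFiling` and its fields paraphrase the disclosures the filing is described as requiring (function, logic, intended purpose) and are not the CAC registry’s actual form fields.

```python
from dataclasses import dataclass

# Hypothetical sketch of the metadata a provider might track internally
# when preparing an algorithm registration. Field names paraphrase the
# disclosures described in the text; they are NOT the CAC's actual fields.
@dataclass
class AlgorithmFiling:
    service_name: str
    algorithm_type: str          # e.g. "recommendation", "generative", "deep_synthesis"
    function_description: str    # what the algorithm does
    decision_logic_summary: str  # how it reaches its outputs, at a high level
    intended_purpose: str        # the use case it serves
    registered: bool = False

    def ready_to_file(self) -> bool:
        """All narrative disclosure fields must be non-empty before submission."""
        return all([self.function_description,
                    self.decision_logic_summary,
                    self.intended_purpose])

filing = AlgorithmFiling(
    service_name="demo-feed",
    algorithm_type="recommendation",
    function_description="Ranks items in a short-video feed",
    decision_logic_summary="Engagement-weighted candidate scoring",
    intended_purpose="Content discovery",
)
print(filing.ready_to_file())  # -> True
```

The point of the sketch is simply that registration is a structured disclosure obligation: the filing fails a readiness check until every narrative field is filled in.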
United Kingdom
The UK has deliberately chosen a different path from the EU. Rather than enacting a single, comprehensive AI law, the UK is pursuing what it calls a “pro-innovation” approach that relies on existing sector regulators to apply AI-specific principles within their existing frameworks.
This approach was outlined in the March 2023 AI regulation white paper, which set out five cross-sector principles that regulators should apply to AI within their domains: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The government chose not to make these principles statutory, instead relying on regulators to implement them through guidance, existing rules, and enforcement priorities.
Key regulators adapting their frameworks to address AI include the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), Ofcom, the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC). Each is developing AI-specific guidance within its sector.
The AI Safety Institute (AISI), established in November 2023, was renamed to the AI Security Institute in February 2025, signaling a stronger government focus on national security risks from AI—including cyberattacks and weapons development—rather than the broader safety agenda that originally defined the institute. Critics have warned that this rebranding narrows attention away from ethics, bias, and rights-related AI risks. The institute tests advanced AI models for dangerous capabilities and works with AI developers on safety protocols. It is not a regulator and does not have enforcement powers.
The UK government’s legislative plans have also evolved. The government initially planned a short AI bill during the 2024–26 parliamentary session, but has since shifted to preparing a more comprehensive official AI Bill expected in the next parliamentary session in 2026. A Private Member’s Bill on AI was reintroduced in March 2025, but without government backing. In October 2025, the Department for Science, Innovation and Technology (DSIT) opened the AI Growth Lab consultation to gather input on AI governance, with the consultation period closing January 2, 2026.
The contrast with the EU is significant. The EU’s centralized, prescriptive approach creates uniform rules across all sectors. The UK’s approach allows sector regulators to tailor their response to AI risks specific to their domain, but risks creating gaps and inconsistencies between sectors.
For companies that operate in both the UK and EU, the practical implication is that they must comply with the EU AI Act for EU-facing activities while also navigating the UK’s sector-specific requirements—which may diverge from EU rules in both substance and timing.
International Frameworks and Standards
Several international organizations have developed AI governance frameworks that, while not always legally binding, shape national legislation and corporate practices worldwide. These frameworks represent an evolving consensus on basic AI governance principles.
- OECD AI Principles (2019, updated 2024)
- The OECD AI Principles were adopted in May 2019 and updated in May 2024 to address developments in generative AI. The principles cover inclusive growth, human-centered values, transparency, robustness and security, and accountability. They also include recommendations for governments on AI policy. Over 45 countries have endorsed the OECD AI Principles, making them the most widely adopted international AI governance framework. They have directly influenced the EU AI Act, Canada’s AIDA, and numerous other national frameworks.
- G7 Hiroshima AI Process and Code of Conduct
- During Japan’s G7 presidency in 2023, the Hiroshima AI Process produced an international Guiding Principles document and a voluntary Code of Conduct for organizations developing advanced AI systems. The Code of Conduct covers risk identification and mitigation, reporting of vulnerabilities, transparency about AI capabilities and limitations, and responsible information sharing. It is voluntary but has been endorsed by major AI companies. The G7 has continued to develop these commitments in subsequent presidencies.
- UNESCO Recommendation on the Ethics of AI (2021)
- Adopted by all 193 UNESCO member states in November 2021, this is the first global normative instrument on AI ethics. It covers proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, and accountability. While not legally binding, it provides a framework that member states are encouraged to implement through national policy and legislation. UNESCO has published readiness assessments to help countries evaluate their preparedness for AI governance.
- Council of Europe Framework Convention on AI (Entered into Force November 2025)
- The Framework Convention on Artificial Intelligence is the first legally binding international treaty on AI and is now in effect. Opened for signature in September 2024, the Convention entered into force on November 1, 2025, after ratification by at least five countries including the United Kingdom, France, and Norway. The EU has also signed the Convention. It requires parties to ensure that AI systems respect human rights, democracy, and the rule of law. It covers public and private sector AI use, requires risk and impact assessments, mandates transparency and accountability, and provides for effective remedies. The Convention applies to the 46 Council of Europe member states and is open for accession by non-member countries—the United States, Canada, Japan, and others participated in its negotiation.
- ISO/IEC 42001: AI Management System Standard
- Published in December 2023, ISO/IEC 42001 is the first international management system standard specifically for AI. It provides a framework for organizations to establish, implement, maintain, and continually improve an AI management system. It covers AI risk management, data governance, transparency, and responsible AI development. Organizations can be certified against ISO/IEC 42001 by accredited certification bodies, making it a practical tool for demonstrating AI governance maturity to regulators, customers, and partners. It is expected to become increasingly important as a compliance benchmark, particularly for organizations subject to the EU AI Act.
Other Jurisdictions
Beyond the EU, China, UK, and Canada, many other countries are developing AI governance frameworks. The approaches range from comprehensive legislation to voluntary guidelines, reflecting different regulatory cultures and priorities.
| Jurisdiction | Approach | Status |
|---|---|---|
| Brazil | Comprehensive AI legislation (PL 2338/2023) modeled partly on the EU AI Act. Risk-based framework with requirements for high-risk AI systems, transparency obligations, and rights of affected persons. Establishes a national AI authority. | The Senate approved PL 2338/2023 on December 10, 2024 and forwarded it to the Chamber of Deputies on March 17, 2025. A special committee of 33 members was established in April 2025 to analyze the bill. Still requires Chamber approval and presidential signature. Expected to be the first major AI law in Latin America. |
| Japan | Japan passed the AI Promotion Act on May 28, 2025 (took effect June 4, 2025; full effect September 2025), making it the second major APAC economy with comprehensive AI legislation. The law takes an innovation-focused approach built around four core principles and establishes an AI Strategy Headquarters within the Cabinet, chaired by the Prime Minister. Notably, the Act contains no explicit penalties—relying instead on a cooperative and reputational compliance model. Japan also published AI Guidelines for Business in 2024 (updated from 2019 Social Principles of AI) and leveraged its G7 presidency in 2023 to advance the Hiroshima AI Process. | Enacted. AI Promotion Act in full effect as of September 2025. Japan is an active participant in international AI governance discussions and the second APAC jurisdiction (after South Korea) to adopt comprehensive AI legislation. |
| South Korea | South Korea’s National Assembly passed the Framework Act on AI Development and Establishment of a Foundation for Trustworthiness on December 26, 2024. Promulgated January 21, 2025, the law takes effect January 22, 2026 (one-year transition period). The Act consolidates 19 separate AI bills into a unified framework covering high-impact AI in healthcare, energy, and public services. It includes mandatory labeling for generative AI content and establishes a national AI governance committee. South Korea became the first APAC jurisdiction to adopt comprehensive AI legislation. | Enacted. Promulgated January 2025 with a one-year transition period. Takes full effect January 22, 2026. South Korea has also been active in developing AI ethics guidelines and sectoral guidance. |
| Singapore | Model AI Governance Framework (first published 2019, updated 2020) provides voluntary guidance on deploying AI responsibly. AI Verify—an AI governance testing framework and toolkit—launched in 2022 to help organizations self-assess AI systems against governance principles. | Voluntary frameworks in effect. Singapore positions itself as a pragmatic, business-friendly AI governance hub. No comprehensive binding AI legislation, but sector regulators (MAS for financial services) apply existing rules. |
| India | No dedicated AI legislation. Government has issued advisory guidelines on AI, particularly for generative AI services. The Digital India Act (still in development) may include AI-related provisions. India’s IT Act and sector-specific regulations provide some coverage. | Advisory guidelines in effect for generative AI. Broader legislative framework still under development. India has signaled a preference for a light-touch, innovation-focused approach. |
| Australia | Published voluntary AI Ethics Principles (2019) and conducted a government review of AI regulation. The “Safe and Responsible AI” consultation (2023–2024) considered mandatory guardrails for high-risk AI. Government has signaled intent to move toward binding requirements. | Voluntary principles in effect. Mandatory guardrails for high-risk AI under consideration following public consultation. Sector regulators (APRA for financial services, TGA for health) applying existing frameworks to AI. |
Cross-Border Challenges
Companies building or deploying AI across multiple jurisdictions face a set of practical challenges that no single compliance framework addresses.
Regulatory divergence is the most fundamental problem. The EU AI Act’s risk classification system does not map neatly onto China’s application-specific regulations or the UK’s sector-led approach. A system classified as “high-risk” under the EU AI Act may face completely different (or no) requirements in another jurisdiction. Conversely, a system that requires algorithm registration in China may have no equivalent obligation in Europe.
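A toy mapping makes the divergence concrete. The jurisdiction entries and obligation labels below are illustrative simplifications for the sake of the example, not a legal taxonomy of any regime:

```python
# Illustrative only: a toy model of how one AI system can trigger entirely
# different obligation sets per jurisdiction. Labels are simplifications,
# not legal categories drawn from any statute.
OBLIGATIONS: dict[str, set[str]] = {
    "EU": {"risk_classification", "conformity_assessment", "technical_documentation"},
    "China": {"algorithm_registration", "security_assessment", "content_labeling"},
    "UK": {"sector_regulator_guidance"},  # sector-led model, no single statutory regime
}

def divergence(jurisdiction_a: str, jurisdiction_b: str) -> set[str]:
    """Obligations required in A that have no counterpart in B."""
    return OBLIGATIONS[jurisdiction_a] - OBLIGATIONS[jurisdiction_b]

# Satisfying every EU obligation says nothing about China's requirements:
print(sorted(divergence("China", "EU")))
# -> ['algorithm_registration', 'content_labeling', 'security_assessment']
```

The empty overlap between the sets is the point: there is no shared baseline that compliance in one jurisdiction automatically carries over to another.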
Data localization and cross-border data flows add another layer of complexity. AI systems depend on data for training, testing, and operation. Different countries impose different restrictions on where data can be stored, processed, and transferred. The EU’s GDPR, China’s Personal Information Protection Law (PIPL), and various national data localization requirements can constrain how global AI systems are built and deployed.
No mutual recognition framework exists for AI compliance. A company that fully complies with the EU AI Act cannot assume that compliance satisfies any other jurisdiction’s requirements. There is no AI equivalent of the mutual recognition agreements that exist in some areas of trade regulation. ISO/IEC 42001 certification may eventually help bridge this gap, but it is not yet widely recognized as a regulatory compliance tool.
Enforcement is uneven and unpredictable. Some jurisdictions have active, well-resourced regulators (the EU, China, the US at the sector level). Others have governance frameworks on paper but limited enforcement capacity. Companies must decide how to allocate compliance resources across jurisdictions with very different enforcement realities.
Extraterritorial reach compounds the complexity. The EU AI Act applies to any AI system placed on the EU market, regardless of where the provider is located. China’s AI regulations apply to services offered within China. A company based in one country can simultaneously be subject to the AI regulations of multiple jurisdictions with conflicting requirements.
Practical Steps for Global AI Compliance
There is no shortcut to global AI compliance, but there are strategies that make it manageable. Organizations operating across borders should build their compliance programs around the highest common denominator while maintaining the flexibility to address jurisdiction-specific requirements.
- Map your regulatory exposure by jurisdiction. Identify every country where your AI systems are developed, deployed, or have users. Determine which AI-specific and general laws apply in each jurisdiction. Do not assume that operating remotely exempts you from local regulation—extraterritorial reach is common.
- Build to the highest standard. If you operate in the EU, your EU AI Act compliance program will likely cover many requirements in other jurisdictions. Use the most demanding framework you face as your baseline and adapt for jurisdiction-specific requirements rather than building separate compliance programs from scratch.
- Adopt ISO/IEC 42001 as an internal governance baseline. Even if certification is not required, structuring your AI management system around ISO/IEC 42001 provides a recognized framework that maps well to regulatory requirements in multiple jurisdictions. Certification can also signal governance maturity to regulators, customers, and partners.
- Monitor the Council of Europe Framework Convention. Now in force as of November 2025, this first binding international AI treaty will shape AI governance in 46+ countries. Track transposition into national law in the jurisdictions where you operate, as ratifying countries must align their domestic frameworks with the Convention’s requirements.
- Plan for data governance across borders. AI compliance is inseparable from data compliance. Ensure your data flows—for training, testing, and production—comply with each relevant jurisdiction’s data protection and localization requirements. This may require regional data strategies and infrastructure.
- Engage with regulatory consultations. Many jurisdictions are still developing their AI frameworks. Participating in public consultations and industry working groups gives you early visibility into upcoming requirements and the opportunity to shape practical, workable rules.
- Document your governance practices thoroughly. Across jurisdictions, regulators increasingly expect organizations to demonstrate—not just assert—that they have AI governance processes in place. Maintain detailed records of risk assessments, testing, human oversight procedures, and decision-making rationale.
- Prepare for compliance deadlines. The EU AI Act’s obligations are phasing in through 2027. Other jurisdictions have their own timelines. Build a compliance calendar that tracks key dates across all relevant jurisdictions and allocate resources to meet them.
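The compliance-calendar idea in the last point can be sketched as a small data structure. The deadlines below are illustrative: the South Korea and China dates are taken from this article, while the EU date is a placeholder for the 2027 phase-in and would need legal verification before use.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of a cross-jurisdiction compliance calendar. Entries are
# illustrative examples, not an authoritative or complete deadline list.
@dataclass(frozen=True)
class Deadline:
    jurisdiction: str
    obligation: str
    due: date

CALENDAR = [
    Deadline("China", "Cybersecurity Law AI amendments in force", date(2026, 1, 1)),
    Deadline("South Korea", "AI Framework Act takes full effect", date(2026, 1, 22)),
    Deadline("EU", "Remaining AI Act obligations phase in", date(2027, 8, 2)),  # placeholder date
]

def upcoming(today: date) -> list[Deadline]:
    """Deadlines due on or after `today`, soonest first."""
    return sorted((d for d in CALENDAR if d.due >= today), key=lambda d: d.due)

for d in upcoming(date(2026, 1, 10)):
    print(d.due.isoformat(), d.jurisdiction, "-", d.obligation)
```

Even a sketch this simple captures the operational core of the recommendation: one sorted view of all jurisdictions’ dates, so resourcing decisions are made against the full picture rather than one regime at a time.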
Timeline
May 2019: OECD AI Principles adopted by 42 countries, establishing the first widely endorsed international AI governance framework.
November 2021: UNESCO Recommendation on the Ethics of AI adopted by all 193 member states.
March 2022: China’s Algorithmic Recommendation Management Provisions take effect—the first binding regulation targeting recommendation algorithms.
June 2022: Canada introduces Bill C-27 including AIDA, proposed as the country’s first comprehensive AI law.
January 2023: China’s Deep Synthesis Provisions take effect, regulating deepfakes and synthetic content.
March 2023: UK publishes AI regulation white paper outlining its pro-innovation, sector-led approach.
August 2023: China’s Generative AI Service Management Measures take effect.
October 2023: G7 Hiroshima AI Process produces Guiding Principles and a voluntary Code of Conduct for advanced AI systems.
November 2023: UK AI Safety Institute established to focus on frontier AI safety research and evaluation.
December 2023: ISO/IEC 42001 (AI Management System standard) published.
March 2024: EU AI Act formally adopted by the European Parliament.
May 2024: OECD AI Principles updated to address generative AI and other developments.
September 2024: Council of Europe Framework Convention on AI opened for signature—the first legally binding international AI treaty.
December 2024: South Korea’s National Assembly passes the Framework Act on AI Development and Establishment of a Foundation for Trustworthiness. Brazilian Senate approves AI bill PL 2338/2023.
January 2025: Canadian Parliament prorogued; Bill C-27 (including AIDA) dies on the order paper. South Korea promulgates AI Framework Act (effective January 2026).
February 2025: EU AI Act’s prohibition on unacceptable-risk AI systems takes effect. UK AI Safety Institute renamed to AI Security Institute.
March 2025: Brazilian AI bill PL 2338/2023 forwarded to Chamber of Deputies.
May 2025: Japan passes the AI Promotion Act (took effect June 2025; full effect September 2025)—becoming the second major APAC economy with comprehensive AI legislation.
September 2025: China’s AI Content Labeling Measures take effect alongside mandatory technical standard GB45438-2025. China issues National AI Governance Framework.
October 2025: China’s top legislature passes major Cybersecurity Law amendments incorporating AI provisions (effective January 2026).
November 2025: Council of Europe Framework Convention on AI enters into force—the first binding international AI treaty now in effect, ratified by the UK, France, Norway, and others.
January 2026: China’s Cybersecurity Law amendments take effect, bringing AI into national law. South Korea’s AI Framework Act takes full effect. Global AI regulatory activity continues to accelerate across all major jurisdictions.
Source Documents
OECD AI Principles: The most widely endorsed international AI governance framework, adopted by over 45 countries.
UK AI Regulation White Paper: The UK government’s framework for AI regulation through existing sector regulators.
Generative AI Service Management Measures: China’s binding regulations for generative AI services offered to the public (Chinese language).
Council of Europe Framework Convention on AI: The first legally binding international treaty on AI, covering human rights, democracy, and rule of law.
ISO/IEC 42001: The first international management system standard for AI, providing a certifiable governance framework.
Bill C-27 (AIDA): Legislative history of Canada’s proposed AI and data legislation, including AIDA.
G7 Hiroshima Code of Conduct: G7 guiding principles and code of conduct for organizations developing advanced AI systems.
Model AI Governance Framework: Singapore’s voluntary framework for responsible AI deployment.
Related Articles
EU AI Act Compliance Checklist: What to Do Before August 2026
The GPAI Code of Practice: What AI Model Providers Need to Know
The EU AI Act Risk Classification: What You Actually Need to Know
Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.