GROUNDED AI

Part 2a – Comparing the Five Eyes

This is Part 2a of my series on AI policies. In this post, we compare the AI policies of the Five Eyes countries (New Zealand, Australia, United States, United Kingdom, and Canada).

Part 1 is located here.

I’ll also provide examples showing how these policies shape the way AI is implemented in practice. You can skip directly to them here.

Part 2b will compare the AI policies of a few other countries.

Introduction

Each of these five countries is pushing AI adoption, but they differ sharply in how much they regulate, who they put on the hook, and how fast they’re moving, which in turn shapes what “good” AI looks like on the ground.

New Zealand

New Zealand finally released its first national AI Strategy in July 2025, having been the last OECD country to do so. The strategy is explicitly OECD‑aligned, focuses on AI adoption rather than building foundation models, and aims to add around NZ$76 billion to the economy by 2038. It takes a light‑touch, principles‑based approach that leans heavily on existing laws like the Privacy Act 2020, Fair Trading Act 1986, and Companies Act 1993, backed by “Responsible AI Guidance for Businesses” and a Public Service AI Framework released in early 2025.

Positives include clear guidance for SMEs, a strong emphasis on human‑centred values and transparency, and government “leading by example” via the public service framework. The strategy also recognises Treaty of Waitangi obligations and positions New Zealand as an “adopter nation,” importing proven solutions rather than trying to outspend bigger economies. The downsides are that there is still no dedicated AI Act, the country moved later than peers, and relying on generic legislation may leave gaps around high‑risk or frontier use cases.

Australia

Australia’s current policy is built around a “Safe and Responsible AI in Australia” program and a 2024 consultation on mandatory guardrails for high‑risk AI uses. The government has concluded that existing regulation is “not fit for purpose” for distinct AI risks and is moving toward a risk‑based framework where high‑risk applications (for example in healthcare) face mandatory testing, transparency, and accountability obligations, while low‑risk uses are allowed to flourish with minimal interference. This is organised around five pillars: regulatory clarity, best practice, capability building, government as exemplar, and international engagement.

In the public sector, a National Framework for the Assurance of AI in Government and a pilot AI assurance framework are being rolled out, aligned with Australia’s AI Ethics Principles. Strengths of this model are its explicit focus on risk tiers, strong attention to safety in high‑risk contexts, and concrete plans for AI assurance in government services. Weaknesses are that the legal architecture is still evolving, many obligations will be spread across amendments to numerous sectoral laws, and SMEs may find the emerging guardrails complex to interpret until detailed rules are finalised.

United States

Under President Biden, Executive Order 14110 set out an ambitious, federal‑wide framework for “safe, secure and trustworthy” AI, including safety testing requirements, an AI Safety Institute at the Department of Commerce, and Chief AI Officers across agencies. That order framed AI as both promise and peril, stressing civil rights, worker protection, and national security, and was seen as the most comprehensive US AI governance instrument to date. On 20 January 2025, President Trump revoked EO 14110 on his first day back in office, as part of a broader rescission of Biden‑era directives.

Trump’s subsequent Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” shifts the policy centre of gravity toward deregulation and global AI dominance, directing agencies to roll back AI rules seen as obstacles and to develop an AI Action Plan focused on competitiveness and infrastructure. Commentators describe this as replacing an oversight‑heavy model with a private‑sector‑led, growth‑oriented strategy, including large‑scale infrastructure initiatives. The upside is a very pro‑innovation environment for US AI companies; the downside is weaker federal emphasis on safety, transparency and rights protections, with significant uncertainty about which elements of the prior safety agenda will survive.

United Kingdom

The UK’s 2023 white paper “AI Regulation: A Pro‑Innovation Approach” deliberately avoids an EU‑style horizontal AI Act and instead sets out five cross‑sector principles—safety and robustness, transparency and explainability, fairness, accountability and governance, and contestability and redress—to be applied by existing regulators in their domains. The government’s objectives are to drive growth and prosperity, increase public trust in AI, and strengthen the UK’s global AI leadership while steering clear of “heavy‑handed legislation.” Regulators like the FCA, Bank of England, and ICO are already using this principles‑based mandate to shape sectoral guidance and supervision.

Internationally, the UK has leaned into AI safety leadership through the 2023 AI Safety Summit at Bletchley Park and the resulting Bletchley Declaration, which brought 28 countries together around shared concerns on frontier AI and commitments to risk‑based safety policies. Strengths of the UK model include flexibility, close alignment with industry, and a dedicated AI Safety Institute to test powerful models. However, devolving implementation to many regulators risks fragmentation and gaps between sectors, and the lack of a single AI statute leaves some uncertainty for cross‑cutting issues like general‑purpose models.

Canada

Canada is building a relatively dense framework that combines legislation, directives, and strategy. The Artificial Intelligence and Data Act (AIDA) focuses on “high‑impact” AI systems that significantly affect individuals’ health, safety, or rights, requiring organisations to identify and address risks of harm and bias at design time, explain intended uses and limitations, and implement ongoing monitoring and mitigation. It aims to protect Canadians from AI‑related harms while promoting innovation, transparency, accountability, and international market access.

For the federal public sector, the Directive on Automated Decision‑Making requires departments to complete an Algorithmic Impact Assessment (AIA) before deploying automated decision systems, classify impact levels, and apply graduated mitigation, transparency, and human‑in‑the‑loop requirements. Building on that, Canada launched an AI Strategy for the Federal Public Service 2025‑2027, which mandates risk assessments, disclosure of AI use, citizen feedback channels, and new oversight structures to ensure ethical deployment in government. Canada has also created an AI Strategy Task Force to shape a broader national framework by the end of 2025, with themes spanning research talent, AI adoption, commercialisation, safe systems and public trust, skills, infrastructure, and security. The strengths of this ecosystem are its early, concrete tooling for risk assessment and a clear focus on high‑impact systems; its challenges include significant complexity and the fact that key elements (like AIDA) are still being finalised.
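To make the Directive’s graduated model concrete, here is a minimal Python sketch of how an AIA‑style score might map to impact levels and escalating obligations. The thresholds, band boundaries, and obligation lists below are illustrative assumptions, not the Directive’s actual scoring rules; the real AIA derives its score from a detailed questionnaire.

```python
# Illustrative thresholds only: the real Algorithmic Impact Assessment
# derives its score from a detailed questionnaire, and the Directive's
# actual level boundaries and obligations differ from these assumptions.
IMPACT_LEVELS = [
    (25, "Level I",   ["plain-language notice that a decision is automated"]),
    (50, "Level II",  ["notice", "documented peer review"]),
    (75, "Level III", ["notice", "peer review", "human intervention for adverse decisions"]),
    (100, "Level IV", ["notice", "peer review", "final decision made by a human", "published AIA results"]),
]

def classify(aia_score: float) -> tuple[str, list[str]]:
    """Map a hypothetical AIA score (0-100) to an impact level and the
    graduated obligations that level would trigger."""
    for ceiling, level, obligations in IMPACT_LEVELS:
        if aia_score <= ceiling:
            return level, obligations
    raise ValueError("score must be between 0 and 100")

level, obligations = classify(62.0)
print(level, obligations)  # Level III plus its obligation list
```

The point is only the graduated structure: as the assessed impact rises, the transparency and human‑oversight requirements tighten, which is exactly the pattern the triage and banking examples below run into.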

Use case 1: AI triage in public healthcare

Imagine an AI system that triages patients in hospital emergency departments, prioritising them based on symptoms, history and risk scores.
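Before looking at how each jurisdiction would govern it, a toy sketch helps fix ideas about what such a system does. Everything here is invented for illustration: the features, weights, and thresholds have no clinical validity, and a real triage model would be developed and regulated as a medical device.

```python
# Toy illustration only: invented features, weights, and thresholds,
# not the clinical logic of any real triage system.
def triage_priority(heart_rate: int, spo2: float, age: int,
                    high_risk_history: bool) -> tuple[float, bool]:
    """Return a risk score and whether a clinician must review the case.

    The human-review flag is the kind of human-in-the-loop control
    every framework discussed below expects for high-stakes AI."""
    score = 0.0
    if heart_rate > 120:
        score += 2.0
    if spo2 < 0.92:
        score += 3.0
    if age >= 75:
        score += 1.0
    if high_risk_history:
        score += 1.5
    return score, score >= 3.0  # borderline/high scores escalate to a human

patients = [("A", 130, 0.95, 40, False), ("B", 90, 0.90, 80, True)]
queue = sorted(patients, key=lambda p: -triage_priority(*p[1:])[0])
print([p[0] for p in queue])  # ['B', 'A'] -- highest assessed risk first
```

With that picture in mind, here is how each jurisdiction would treat such a system: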

  • New Zealand – Under the AI Strategy, this would be treated as a high‑stakes adoption project inside existing health, privacy and safety laws rather than under an AI‑specific statute. The Public Service AI Framework and guidance would push agencies to emphasise human oversight, fairness, and transparency to patients, but the absence of a dedicated AI risk classification regime means Health New Zealand (Te Whatu Ora) and sector regulators would need to interpret “high‑risk” largely through sectoral rules and OECD‑style principles.
  • Australia – The proposed mandatory guardrails specifically call out high‑risk settings like healthcare, so a triage system would likely trigger obligations for rigorous testing, impact assessment, transparency to patients, and clear accountability if harm occurs. In parallel, the National Framework for AI assurance in government would shape procurement and assurance, making it harder for agencies to deploy such a system without structured risk assessment and governance documentation.
  • United States – Under the Biden EO, a federal hospital system would have faced strong pressure to adopt standardised safety evaluations and civil‑rights‑sensitive design; after its revocation and replacement with the more deregulatory Trump AI orders, the emphasis shifts to whatever sectoral health, privacy and anti‑discrimination laws already apply. Providers and vendors may find it easier to innovate rapidly, but there is less centralised federal guidance on AI‑specific safety testing and disclosure, so practices may diverge widely between systems and states.
  • United Kingdom – Healthcare regulators (for example, the Medicines and Healthcare products Regulatory Agency and the Care Quality Commission) would interpret the five AI principles—safety, fairness, transparency, accountability and contestability—for triage systems, layering them over existing clinical safety regimes. The UK’s AI Safety Institute and its role in testing frontier models add extra scrutiny if the triage relies on powerful foundation models, but there is still no single horizontal AI law dictating risk tiers.
  • Canada – In Canada, a national triage system would almost certainly be classed as “high‑impact” under AIDA, triggering requirements for risk and bias assessments, clear documentation of limitations, and continuous monitoring. In the federal sphere, departments would also have to complete an Algorithmic Impact Assessment, publish results, and apply the Directive’s higher bar for transparency, human oversight, and recourse options at the appropriate impact level, making healthcare triage one of the most tightly governed AI uses in the Canadian public sector.

Use case 2: Generative AI assistant in banking

Now consider a large retail bank deploying a generative‑AI assistant to support customer service, draft communications, and help staff assess loan applications.

  • New Zealand – The AI Strategy’s “adopter nation” framing, combined with Responsible AI guidance, encourages banks to adopt proven commercial GenAI tools while ensuring they comply with privacy, consumer, and fair trading law. There is wide room to experiment with productivity‑oriented assistants, but less prescriptive detail on AI‑specific credit‑risk or fairness testing than in more heavily regulated jurisdictions, so banks must build their own internal guardrails on top of generic obligations.
  • Australia – Because most customer‑service chat and drafting tools would be classified as relatively low risk, banks can roll them out quickly, provided they follow best‑practice guidance and existing financial‑services rules. However, as soon as the assistant meaningfully influences loan decisions, it is likely to fall into high‑risk territory under the emerging guardrail framework, requiring clear testing, documentation of model behaviour, and board‑level accountability to satisfy both financial and AI oversight expectations (a sketch of this kind of human‑review gate follows this list).
  • United States – The Trump administration’s emphasis on removing “barriers” and sustaining US AI dominance gives large US banks strong incentives to push GenAI deep into operations, with fewer new, AI‑specific federal constraints beyond existing banking and consumer‑protection rules. Vendors may no longer face centralised reporting mandates like those in EO 14110, so adoption will be shaped more by supervisors’ expectations, litigation risk, and reputational concerns than by a dedicated federal AI safety regime.
  • United Kingdom – The FCA and Bank of England are already exploring how to apply the UK’s pro‑innovation principles to AI in financial services, including concerns about model risk management, data protection, and fairness. A GenAI assistant that touches credit decisions would be expected to be explainable, auditable, and contestable, and firms would need to show regulators how they meet the five principles while still innovating, creating a relatively clear but principles‑heavy compliance conversation.
  • Canada – In Canada, if the assistant influences lending outcomes, it may be treated as a high‑impact system under AIDA, requiring robust risk and bias assessments, documentation and monitoring by the bank. For any federally regulated aspects—such as agencies using similar tools for benefits or loan programmes—the Directive on Automated Decision‑Making and the AI Strategy for the Federal Public Service would demand a formal Algorithmic Impact Assessment, public disclosure, and mechanisms for customers to challenge AI‑influenced decisions.
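Across all five jurisdictions, the recurring control for credit‑adjacent GenAI is the same: detect when a response could influence a lending decision, route it to a human, and keep an audit trail. Here is a minimal sketch of that gate; the is_credit_related keyword check and the genai_audit.jsonl log are placeholder assumptions for illustration, not any regulator’s prescribed mechanism.

```python
import json
import time

def is_credit_related(prompt: str) -> bool:
    # Placeholder classifier -- a real deployment would use a vetted
    # policy model or rule set, not keyword matching.
    return any(k in prompt.lower() for k in ("loan", "credit", "mortgage"))

def assisted_reply(prompt: str, generate) -> dict:
    """Wrap a GenAI call with a human-review gate and an audit record."""
    draft = generate(prompt)
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "draft": draft,
        "needs_human_review": is_credit_related(prompt),
    }
    # Append-only audit trail: the basis for the explainability and
    # contestability expectations described in the bullets above.
    with open("genai_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

out = assisted_reply("Draft a response on this mortgage application",
                     generate=lambda p: "DRAFT: ...")
print(out["needs_human_review"])  # True -> route to a loan officer
```

The append‑only log is what makes AI‑influenced decisions auditable and contestable after the fact, which is the common thread running through the UK’s contestability principle, Australia’s high‑risk guardrails, and Canada’s AIA recourse requirements.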

Policy snapshot table

| Country | Core AI objectives | Key strengths | Main concerns or gaps |
|---|---|---|---|
| New Zealand | OECD‑aligned, light‑touch, adoption‑focused strategy to boost growth and trust. | Clear guidance for SMEs; public sector AI framework; focus on human‑centred, Treaty‑aware adoption. | Late mover; no dedicated AI Act; high‑risk uses rely on generic laws and principles. |
| Australia | Risk‑based guardrails: protect from harms in high‑risk uses while enabling low‑risk AI. | Emerging mandatory guardrails; national AI assurance framework for government. | Framework still in flux; complexity of spreading obligations across many laws. |
| United States | Sustain and enhance US AI dominance via deregulation and private‑sector‑led growth. | Strong innovation incentives and investment; limited new federal constraints on industry. | Reduced federal focus on AI‑specific safety and rights; uncertainty about long‑term governance path. |
| United Kingdom | Pro‑innovation, principles‑based regulation implemented by existing regulators. | Flexible, sector‑specific oversight; international safety leadership via Bletchley Declaration and AI Safety Institute. | Potential fragmentation across regulators; no single horizontal AI statute. |
| Canada | Balanced regime: AIDA for high‑impact AI plus strong public‑sector AI governance. | Mature algorithmic impact assessments; detailed public‑sector rules; clear focus on harms and bias. | Regime is relatively complex; key legislative pieces still being implemented and refined. |
