GROUNDED AI

Part 1 – Why (AI) Policy Matters

This is Part 1 of my series about AI policies.

Yes, it is really exciting stuff. Said no one ever.

And yet, your country’s AI policy has a direct impact on how you build and use AI agents. The policies of the company you work for have an even greater one.

In Part 1 of this series we’ll look at what government policies are and how they contrast with company policies.

In Part 2 I’ll explore the AI policies of a few randomly selected countries. We’ll also delve into some AI policies specific to a randomly selected company in each country.

In Part 3 I’ll provide further real-world use cases and contrast how these policies could influence how AI agents are built.

AI in the Real World: Policies That Shape Your Digital Future

Artificial intelligence is reshaping everything from job applications to hospital wait times, but the real action happens behind the scenes – in policies. Whether set by governments or companies, these rules decide if AI helps or harms everyday people like you and me. In this post, we’ll unpack what policies are, why they matter, how they’re made, and what they aim to achieve. By the end, you’ll see why staying informed isn’t just for experts – it’s for anyone who uses a smartphone or applies for a loan.

What policies are and why they matter

Policies are deliberate guides that answer “what we’ll do and why” about big challenges. Policies are not laws, but they shape laws, programs, and daily operations. This goes for government and company policies.

Governments create policies for society-wide issues like national security or public health. Companies craft internal ones to run their businesses smoothly and safely.

Government policies come from parliaments, cabinets, or agencies and focus on the public good. Think of New Zealand’s Public Service AI Framework, which sets principles for safe AI use in government services. These are broad blueprints: strategies, regulations, or funding plans that steer the economy, protect rights, and deliver services.

Company policies, by contrast, are internal rules for employees and operations. A firm might ban using public AI tools like ChatGPT for customer data to avoid leaks. They’re narrower, enforcing ethics and efficiency within one organization.

The main difference between them? Governments wield legal power; companies focus on survival and profits. But they overlap constantly. Governments often mandate company policies – for instance, privacy laws require businesses to have corresponding privacy and data-handling policies. In other instances, government policies are voluntary, such as New Zealand’s AI policy, which is a light-touch framework that encourages companies to adopt similar principles, thereby creating alignment without heavy mandates.

Take AI in hiring: A government policy (like Canada’s AIDA for high-risk systems) might require bias audits on recruitment algorithms. A retailer then builds an internal policy mandating human review of AI shortlists, plus regular audits of the AI training data.

Or consider healthcare. Australia’s National AI Plan commits significant funding – nearly $30 million from the Medical Research Future Fund (MRFF), awarded in 2024 under initiatives aligned with the 2025 Plan – to develop AI tools for diagnostics, including skin cancer detection via 3D imaging and cardiac risk prediction. This supports trials at public hospitals (e.g., the University of Queensland’s $3 million melanoma project across regional sites like Mildura Base Hospital), speeding up early diagnoses by analysing vast imaging datasets faster than humans alone and potentially boosting detection rates by 20-30%. Hospitals, whether private companies or public entities, also implement internal policies (like data anonymization protocols and human oversight requirements) to comply with national privacy standards, ensuring patient records aren’t exposed during AI processing.

Policies have a direct impact on you and me. Good ones make life better – shorter queues at Work and Income, unbiased loan approvals, smarter traffic lights reducing commutes. Some estimates suggest government AI policies can cut public service costs by 20-30% through automation, freeing money for schools or roads. Company policies ensure that when you bank online or shop, AI doesn’t expose your details.

But weak policies hurt. Without government oversight, company AI might discriminate – denying rentals to certain suburbs based on flawed data. Loose rules amplify biases, erode privacy (your shopping history predicting insurance rates), or displace jobs without retraining support. In 2025, a US bank’s AI denied claims unfairly until regulators stepped in. For everyday folks, policies decide if AI is a helpful tool or a black box calling the shots. Ignoring them means living with someone else’s choices on your wallet, work, and rights.

How are AI policies created? What are the general objectives that an AI policy must achieve?

Creating AI policies is a mix of research, consultation, and iteration – reactive to scandals, proactive for opportunities. Neither governments nor companies start from scratch; they build on global standards like OECD AI Principles.

Government AI policies typically follow a structured path. First, agencies scan risks and benefits – e.g., NZ’s Department of Internal Affairs reviewed public service AI use post-ChatGPT boom. Then, consultations gather input from businesses, experts, and citizens (like submissions on NZ’s 2025 AI Framework). Politicians then approve the policy, often tying it to budgets – Australia’s 2025 National AI Plan, for example, pledged infrastructure funding. Rollout includes guidance (the UK’s AI Playbook) or laws (Canada’s AIDA). Updates happen often to keep pace with the technology.

Company AI policies are faster and internal. Leadership flags needs, say, after an employee shares confidential code via AI. Legal/ethics teams draft rules, often using templates from Deloitte or Microsoft. Staff training rolls it out, with audits for compliance. In NZ tech firms, policies emerged post-2024 MBIE guidance, aligning with government to attract talent.

Both levels chase core objectives, balancing innovation with safety. Here’s what every solid AI policy must hit:

  1. Safety and Robustness: Ensure AI doesn’t fail catastrophically. Governments mandate testing (US EO 14179 stresses infrastructure security); companies require “human-in-the-loop” for decisions like approvals. Objective: Minimize errors, like AI misdiagnosing X-rays.
  2. Fairness and Non-Discrimination: Prevent bias harming groups. Canada’s AIDA targets high-impact systems; company policies ban unvetted datasets. Example: the UK’s principles require contestability – letting you challenge AI loan denials.
  3. Transparency and Accountability: Explain AI decisions. Governments push public registers (Australia’s AI Safety Institute); companies log usage. Objective: Build trust – you know why an AI rejected your job app.
  4. Privacy and Data Governance: Protect personal info. Aligns with laws like NZ’s Privacy Act; company rules forbid feeding customer data into public LLMs. Objective: Prevent breaches eroding public faith.
  5. Economic and Societal Benefit: Drive growth without fallout. Governments fund compute/skills (NZ’s strategy boosts productivity); companies guide ethical use to innovate. Objective: Jobs created outpace losses, with retraining.
  6. Human-Centric Design: Keep people in control. Both levels demand oversight – NZ’s framework emphasizes Māori data sovereignty; firms train staff on limits.
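To make objective 4 concrete: a company rule like “no customer data into public LLMs” is often enforced in code, not just in a policy document. The sketch below is a minimal, hypothetical illustration – the regex patterns are simplistic, and `send_to_public_llm` stands in for a real API call; a production policy would rely on vetted redaction tooling, not two hand-rolled patterns.

```python
import re

# Hypothetical policy guard: redact obvious PII before any text leaves the
# organisation for a public LLM. Illustrative only - real deployments would
# use vetted redaction libraries and cover far more identifier types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")


def redact_pii(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


def send_to_public_llm(prompt: str) -> str:
    """Stand-in for an external LLM API call (no real network call here)."""
    return f"(sent: {prompt})"


def policy_compliant_query(prompt: str) -> str:
    # Enforce the "no customer data into public LLMs" rule before sending.
    return send_to_public_llm(redact_pii(prompt))
```

The design point is that the policy check sits in front of the outbound call, so no code path can reach the external service without passing through it.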

Examples illustrate the point. In 2025 the US revoked what it called “woke” barriers in pursuit of AI dominance, leaning heavily on the safety and security objectives. OpenAI’s self-imposed policies (its Preparedness Framework) mirror this but flex for competition.

Weak policies miss these objectives – China’s lax rules sparked 2025 deepfake scandals, and firms without policies faced lawsuits. Strong ones, like the EU’s AI Act (which influences Five Eyes nations), strike the balance: innovation thrives, risks shrink.

Policies aren’t perfect – governments move slowly, companies chase profits – but they evolve. In NZ, light touch means agility; elsewhere, mandates add teeth.

Wrapping Up: Your Role in the AI Policy Game

AI policies aren’t abstract – they’re the rules deciding if tech serves you or surveils you. Governments set the field; companies play within it, together affecting your job security, privacy, and services. From Wellington’s frameworks to Washington’s deregulation, understanding them empowers everyday people to demand better.

Watch consultations, vote on tech platforms, and ask your employer about their AI rules. What’s one policy change you’d push? Share below. Next post: Country-by-country AI breakdowns.

Like what you read? Subscribe for grounded AI insights that matter.
